“Mobile learning,” or “mLearning,” is learning on mobile electronic devices such as smartphones and tablets. Among its main advantages are easy access at any time and place, and the ability to deliver content developed through microlearning.
At the Google Developers Summit 2019, Jingtao Wang, Research Scientist at Google, presented the future and challenges of mobile learning, showcasing case studies on smartwatches and on monitoring learners with artificial intelligence. He also discussed physiological studies of student learning.
Jingtao Wang is a research scientist at Google, specializing in intelligent mobile interfaces, machine learning, and educational technology. Wang has received various accolades, including the Microsoft Azure for Research Award, the Google Faculty Research Award, and the ACIE Innovation in Education Award.
Hello everyone!
My name is Jingtao Wang, and I am a researcher at Google. Before joining Google full-time, I was a researcher at the University of Pittsburgh.
I have two main lines of research. The first one is machine learning on electronic devices, and the second one is the application of machine learning in the context of education and human learning. Today, I would like to share some of my ongoing projects with you.
The projects I will share today sit at the intersection of these two interests.
As we all know, human learning is one of the most challenging and rewarding tasks there is. Imagine the opportunities that open up when a person learns a new language, learns to play a new musical instrument, or learns to use emerging technologies like TensorFlow.
At the same time, learning remains as challenging as it is important all over the world. According to surveys, more than 10 million people are registered on MOOC platforms such as Coursera, but fewer than 7% of users complete the courses they enroll in. Learning something new is neither trivial nor easy.
Additionally, people need to learn not only in classrooms but also in informal settings.
Today, I would like to show you some examples of how to use machine learning with a digital tool on your wrist, in the palm of your hand, even while walking… I’m talking about the smartwatch.
SmartRSVP
The first project I will show you is called SmartRSVP; the idea is to turn your smartwatch into an intelligent learning device.
The first challenge was the small size of the screen. A typical smartwatch has a 1.5-inch display that shows only two or three words at a time, making it difficult to read a full sentence.
So, how can we enable effective learning?
It depends on the task. What if we break learning into bite-sized pieces? That is what we enable through embedded microlearning on wristwatches: the watch delivers timely reminders and prompts you with quick checks on whether you understood the displayed content.
Another point we addressed is how to read long sentences on such a small watch.
The solution is a technique called RSVP (Rapid Serial Visual Presentation), which shows one word at a time in sequence. Each word can be rendered larger and displayed faster, and the text can be read without moving your gaze.
We built two technologies to make this kind of rapid reading work on smartwatches. The first is attention detection: we show the next word only when you are looking at the screen, pausing the presentation when you look away.
The second is that we monitor the user’s heart rate in real time to gauge their attention: when the signal indicates engagement, we present text at a faster pace.
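As a rough illustration only, here is a minimal sketch of what such an adaptive RSVP loop could look like. The gaze check, the heart-rate reader, and the pacing heuristic are all hypothetical stand-ins; the talk does not specify how the real system maps signals to speed.

```python
import time

BASE_WPM = 250        # assumed baseline reading speed, words per minute
RESTING_HR = 70.0     # assumed resting heart rate for this sketch

def word_delay(heart_rate):
    """Seconds to show the current word; speed up (to at most 1.5x the
    base pace) as heart rate rises above resting, a simple linear guess."""
    engagement = max(0.0, min(1.0, (heart_rate - RESTING_HR) / 30.0))
    return 60.0 / (BASE_WPM * (1.0 + 0.5 * engagement))

def rsvp(words, looking_at_screen, read_heart_rate):
    """Show one word at a time; pause whenever the user looks away."""
    for word in words:
        while not looking_at_screen():      # gaze-based pause
            time.sleep(0.05)
        print(f"\r{word:^20}", end="", flush=True)
        time.sleep(word_delay(read_heart_rate()))

# Demo with stubbed sensors standing in for the watch hardware:
rsvp("RSVP shows one word at a time".split(),
     looking_at_screen=lambda: True,
     read_heart_rate=lambda: 78.0)
```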
Interaction in large classrooms
The other project I will share with you is about improving interaction in large classrooms.
We know that large classes are common at the undergraduate level; it is not unusual to have more than 200 students in a single class. How can we improve interaction in such an environment? A smart mobile app called CourseMIRROR addresses this need with three important features:
The first feature is that it automatically sends a reminder after each lecture, asking students to reflect on the content: what they learned, and whether there is anything confusing they would like their instructor to clarify. This naturally generates dialogue that leads to deeper, more specific questions.
This system has been deployed in classrooms in the U.S. and abroad, with over 400 students using the pilot system, which I have been working on for over five years.
In the case of MOOCs (Massive Open Online Courses), helping students learn more efficiently on their mobile devices has two components:
using the front camera to track students’ facial expressions, and using the rear camera to track physiological signals, which is difficult given how much conditions vary across students. We want to know what students are actually learning while they watch course videos, and in particular when they get distracted.
Based on these signals, we apply adaptive intervention techniques, for example to help the student stay attentive and calm while watching the class.
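The talk does not detail the signal processing, but a common way to read a physiological signal from a rear camera is photoplethysmography: a fingertip over the lens brightens and darkens slightly with each pulse. Here is a minimal sketch under that assumption, with synthetic frames standing in for real camera input:

```python
import numpy as np

def estimate_heart_rate(frames, fps):
    """Estimate heart rate (BPM) from rear-camera frames captured with a
    fingertip over the lens. Blood volume changes modulate the brightness
    of the red channel, so the dominant frequency of the mean red-channel
    signal corresponds to the pulse."""
    signal = np.array([f[..., 0].mean() for f in frames])  # mean red channel
    signal = signal - signal.mean()                        # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 40 / 60) & (freqs <= 180 / 60)  # plausible 40-180 BPM
    return freqs[band][np.argmax(spectrum[band])] * 60.0

# Synthetic demo: a 72 BPM pulse sampled at 30 fps for 10 seconds.
fps, bpm = 30, 72
t = np.arange(fps * 10) / fps
fake_frames = [np.full((4, 4, 3), 128 + 5 * np.sin(2 * np.pi * bpm / 60 * ti))
               for ti in t]
print(round(estimate_heart_rate(fake_frames, fps)))  # ~72
```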
Living Jiagu: Learning about language creation and evolution with artificial intelligence
Now I will present the project that excites me the most: language learning with artificial intelligence. This project explores the intersection of ancient art and artificial intelligence.
In this image, we see the script called “Oracle Bone Script” carved on a turtle shell. It is the earliest known form of Chinese writing, used by people 3,000 years ago.
This unique character-based writing system is a predecessor of scripts used across much of East Asia. The script itself died out, but it evolved into modern Chinese characters; that origin and evolution are recorded on shells like this one.
Understanding the origin of a writing system can give us a deeper understanding of language itself. With that in mind, we created an interactive exhibition that lets users experience learning a new script through everyday objects.
Here are some technical details:
We use a multi-layer convolutional neural network to measure the similarity between the character a user creates and the reference oracle-bone characters shown on the screen. We also use an algorithm to check whether each part of the created character carries the intended meaning; the character is then animated, building a connection between each everyday object and the meaning it represents in the ancient script.
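No architecture details are given in the talk, so the following is only a sketch of the matching idea: a small convolutional encoder (untrained here; real weights would come from training) embeds the user’s sketch and the reference glyphs, and cosine similarity picks the closest match. All layer sizes and names are hypothetical choices:

```python
import numpy as np
import tensorflow as tf

def build_encoder():
    """Hypothetical embedding network: maps a 64x64 grayscale sketch
    to a 128-dimensional vector."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(64, 64, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128),
    ])

def most_similar(encoder, sketch, reference_glyphs):
    """Return the index of the oracle-bone glyph whose embedding is
    closest (by cosine similarity) to the user's sketch."""
    batch = np.stack([sketch] + reference_glyphs)[..., None]
    emb = encoder(batch.astype("float32")).numpy()
    emb /= np.linalg.norm(emb, axis=1, keepdims=True)
    sims = emb[1:] @ emb[0]        # cosine similarity to each reference
    return int(np.argmax(sims))

encoder = build_encoder()
glyphs = [np.random.rand(64, 64) for _ in range(5)]   # stand-in glyph images
print(most_similar(encoder, np.random.rand(64, 64), glyphs))
```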
This project was launched at a Google event in China in 2019, where more than 1,000 participants experienced the demonstration, which consists of two parts, the first running on large touch screens.
The task is to imitate the ancient Chinese scribes who created these pictographic characters: users draw their own characters, which are then animated and composed into scenes on the screen. In this way, we build a connection between everyday objects and the meaning of the corresponding characters as we learn.
Machine learning can make learning support more efficient and engaging, because learning does not happen only in classrooms; it can also happen on your wrist or in the palm of your hand.
Thank you!