PETAL: Learning the Android Way

by Taha Sabih | Photo Credits: Taha Sabih | 11 November 2013

It is now widely accepted that technology plays an important role in how education is delivered and retained. In the Department of Electrical and Electronic Engineering (EEE), Dr. Vincent Tam and Dr. Edmund Lam have been conducting extensive research on e-learning tools. They spoke with us about two of their projects: PETAL, a light and fun one built over the summer, and COMPAD+, a larger-scale research effort that has been used in the teaching of all EEE students since last year.

PETAL

Dr. Tam has been heading the faculty's research on e-learning, and this past summer he was joined by interns from MIT. Dr. Lam had been in contact with MISTI (MIT International Science and Technology Initiatives) and learned that he would be receiving a few interns. He then contacted Dr. Tam to discuss what might make an interesting project for them. Together, they came up with PETAL (Personalized Teaching And Learning), an Android program to aid learning.

Dr. Lam's research interests include image processing, so the team decided to make use of the camera on a tablet for PETAL. The trick was to figure out how the camera could be used to serve the project's objectives, primarily course delivery. Since the camera is the common denominator among portable devices, building around it meant the software could be scaled and ported to different environments as necessary.

To start, they decided to implement a system that would work with lecture delivery tools. The initial idea was to use the camera to keep ‘an eye’ on the viewer non-invasively.

Dr. Lam clarifies, ‘If the camera can point at you while you are watching an online video, it can extract information about how you respond to the lecture; in simpler terms, if it can detect that you are sleeping, it knows that you got bored.’

They came up with a few scenarios that might be of use. For example, if the camera detects that you are sitting too close to the screen, it might pop up a warning that you may be vulnerable to short-sightedness if this behavior continues. It could also detect expressions of confusion, boredom or distraction, and pause the video at that point. More importantly, it could be used to extract underlying data about the quality of the lecture. A sketch of how the proximity check might work follows below.
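As a rough illustration of the first scenario, here is a minimal sketch in Java using OpenCV's bindings. The cascade file, the 40% face-to-frame width threshold, and the ProximityMonitor class are illustrative assumptions for this example, not PETAL's actual code.

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.objdetect.CascadeClassifier;

public class ProximityMonitor {
    // Hypothetical threshold: if a detected face fills more than 40%
    // of the frame width, assume the viewer is sitting too close.
    private static final double TOO_CLOSE_RATIO = 0.40;

    private final CascadeClassifier faceDetector;

    public ProximityMonitor(String cascadePath) {
        // cascadePath points to a stock OpenCV Haar cascade,
        // e.g. haarcascade_frontalface_default.xml
        faceDetector = new CascadeClassifier(cascadePath);
    }

    /** Returns true if a detected face appears too close to the camera. */
    public boolean isViewerTooClose(Mat frame) {
        MatOfRect faces = new MatOfRect();
        faceDetector.detectMultiScale(frame, faces);
        for (Rect face : faces.toArray()) {
            if ((double) face.width / frame.cols() > TOO_CLOSE_RATIO) {
                return true; // face dominates the frame: pop up a warning
            }
        }
        return false;
    }
}
```

Estimating distance from the size of the face bounding box is a deliberately cheap heuristic: it needs no calibration and runs comfortably on tablet hardware, which seems in keeping with the project's non-invasive, camera-only approach.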

Dr. Lam mentions one such instance: ‘If the lecture is distributed to 100 students, and 80 of them get bored or confused 10 minutes into the video, then we can send the lecture back to the instructor, telling him that maybe you are going too fast, because students are losing sight of what you are trying to say. Hence all this learning analytics information can be extracted non-intrusively, without asking the viewers explicit questions.’
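On the analytics side, the aggregation Dr. Lam describes could look something like the following sketch. The EngagementEvent record, the state labels, and the flagProblemMinutes helper are hypothetical names invented for this illustration, not part of PETAL.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class LectureAnalytics {

    /** One hypothetical detection: a student's state at a given minute of the video. */
    public record EngagementEvent(String studentId, int minute, String state) {}

    /**
     * Returns the lecture minutes at which more than `threshold` of the class
     * was detected as bored or confused, with the count at each minute,
     * so those parts of the lecture can be reported back to the instructor.
     */
    public static Map<Integer, Long> flagProblemMinutes(
            List<EngagementEvent> events, int classSize, double threshold) {
        // Count bored/confused detections per minute across all students.
        Map<Integer, Long> perMinute = events.stream()
                .filter(e -> e.state().equals("bored") || e.state().equals("confused"))
                .collect(Collectors.groupingBy(EngagementEvent::minute,
                                               Collectors.counting()));
        // Keep only the minutes where the fraction of the class exceeds the threshold.
        Map<Integer, Long> flagged = new TreeMap<>();
        perMinute.forEach((minute, count) -> {
            if ((double) count / classSize > threshold) {
                flagged.put(minute, count);
            }
        });
        return flagged;
    }
}
```

With Dr. Lam's example of 80 out of 100 students, a call like flagProblemMinutes(events, 100, 0.5) would flag minute 10.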

The technology behind it is not new, but this implementation of it is novel.

PETAL uses facial-recognition algorithms to detect changes in facial expressions. These algorithms were developed with the help of OpenCV (Open Source Computer Vision Library) and, of course, Dr. Lam's expertise.
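To make the expression-tracking idea concrete, here is one classic low-cost heuristic built from OpenCV's stock Haar cascades: the eye cascade mostly fires on open eyes, so if a face is present but no eyes are detected for about a second of frames, the viewer has probably nodded off. The frame limit and the DrowsinessDetector class are assumptions for this sketch; PETAL's actual algorithms are not detailed in the article.

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.objdetect.CascadeClassifier;

public class DrowsinessDetector {
    // Hypothetical: flag the viewer as dozing after 30 consecutive
    // frames (roughly one second of video) with a face but no open eyes.
    private static final int DOZE_FRAME_LIMIT = 30;

    private final CascadeClassifier faceCascade;
    private final CascadeClassifier eyeCascade;
    private int framesWithoutEyes = 0;

    public DrowsinessDetector(String facePath, String eyePath) {
        // Stock OpenCV cascades, e.g. haarcascade_frontalface_default.xml
        // and haarcascade_eye.xml
        faceCascade = new CascadeClassifier(facePath);
        eyeCascade = new CascadeClassifier(eyePath);
    }

    /** Feed one camera frame; returns true once the viewer appears to have dozed off. */
    public boolean update(Mat frame) {
        MatOfRect faces = new MatOfRect();
        faceCascade.detectMultiScale(frame, faces);
        Rect[] found = faces.toArray();
        if (found.length == 0) {
            return false; // no face at all: the viewer may simply be out of frame
        }
        // Search for open eyes only inside the first detected face region.
        MatOfRect eyes = new MatOfRect();
        eyeCascade.detectMultiScale(frame.submat(found[0]), eyes);
        framesWithoutEyes = eyes.empty() ? framesWithoutEyes + 1 : 0;
        return framesWithoutEyes >= DOZE_FRAME_LIMIT;
    }
}
```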


PETAL Eye Tracking: Striking a pose

Dr. Tam also mentioned that the two of them have collaborated before. Dr. Tam has extensive experience in e-learning applications: two years ago he developed a gesture-recognition system, which later facilitated the development of mobile applications for the Heep Hong Society, an organization supporting students with learning difficulties.

This collaboration brings out the practical side of the research done in the department, with the benefits becoming clear once the deliverables are put to use.

About the Author

Taha Sabih

Regular Contributor

Third Year, Electrical Engineering and Economics
