Use of Transfer Learning for Automatic Dietary Monitoring through Throat Microphone Recordings

Speaker: M. A. Tugtekin Turan

Date: October 24, 2019 at 10:30 – B013


Wearable devices and technologies are accelerating the development and integration of modern engineering approaches in healthcare. Dietary monitoring is one of the more challenging healthcare applications and is typically performed through personal food logs. However, manual logging is highly biased and unreliable because individuals tend to underestimate their food intake. Automatic dietary monitoring (ADM) offers an intelligent, wearable alternative. In this presentation, an ADM system based on throat microphone (TM) recordings of food intake sounds is introduced to detect chewing and swallowing events. The throat microphone is a non-invasive transducer mounted on the neck that delivers robust signal recordings for intelligent food intake monitoring.

Using a transfer learning paradigm, an improved food intake detection and classification system is designed. Although labeled food intake recordings from acoustic close-talk microphones (CM) are abundant, TM data is scarce, which creates a bottleneck for training deep architectures on TM data alone. To this end, a new heterogeneous domain adaptation framework based on the teacher/student (T/S) learning paradigm is proposed. The teacher network is trained on abundant high-quality CM recordings, while the student network receives TM recordings as input and distills the teacher's deep feature extraction capacity over a parallel CM/TM dataset. This allows a significantly larger set of adaptation data to be used, adds robustness to the resulting model, and significantly improves the detection and classification performance for food intake events captured through TM recordings.
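The core T/S idea above can be sketched in a minimal form: a frozen teacher produces embeddings from CM input, and the student is trained to reproduce those embeddings from the parallel TM input by minimizing a feature-matching loss. Everything below is an illustrative assumption, not the system presented in the talk: the linear teacher/student mappings, the feature dimensions, and the synthetic parallel CM/TM data are stand-ins for the actual deep networks and recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 40-dim acoustic frames, 16-dim embeddings,
# 256 time-aligned (parallel) CM/TM frame pairs.
D_IN, D_EMB, N = 40, 16, 256

# Frozen "teacher": stands in for a network pretrained on abundant CM data.
W_teacher = rng.normal(size=(D_EMB, D_IN))

# Student: starts near zero and learns to mimic teacher embeddings from TM input.
W_student = rng.normal(size=(D_EMB, D_IN)) * 0.01

# Synthetic parallel recordings: TM modeled as a noisy version of the CM channel.
x_cm = rng.normal(size=(N, D_IN))
x_tm = x_cm + 0.3 * rng.normal(size=(N, D_IN))

def distill_loss(W_s):
    """MSE between student embeddings (from TM) and teacher embeddings (from CM)."""
    z_teacher = x_cm @ W_teacher.T   # distillation targets, teacher is frozen
    z_student = x_tm @ W_s.T
    return np.mean((z_student - z_teacher) ** 2)

def distill_step(W_s, lr=1.0):
    """One gradient-descent step on the feature-distillation loss."""
    z_teacher = x_cm @ W_teacher.T
    z_student = x_tm @ W_s.T
    grad = 2.0 / (N * D_EMB) * (z_student - z_teacher).T @ x_tm
    return W_s - lr * grad

loss_before = distill_loss(W_student)
for _ in range(200):
    W_student = distill_step(W_student)
loss_after = distill_loss(W_student)
print(f"distillation loss: {loss_before:.3f} -> {loss_after:.3f}")
```

The residual loss does not reach zero because the TM channel is a distorted view of the CM signal; in the actual framework, deeper student networks and larger parallel adaptation sets close this gap further before the distilled features are used for food intake detection and classification.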