The recognition of object categories is one of the most challenging problems in computer vision, and food image recognition is among its most promising applications.
Cooking and food are connected to many essential aspects of human life (wellness, health, food products, the bio-environment, ecology). Culinary practices have largely followed the digital transformation of society: many websites have developed dedicated search engines and services for cooking recipes, with active user participation.
There is considerable room for embedding image analysis in the Food-tech field, especially for retrieving images that illustrate recipes. Food category classification is also a key enabling technology.
The proposed solution is an application (a web search engine) dedicated to retrieving, filtering, and classifying recipe images. It is tailored to run on a conventional mobile phone, trained to recognize food items from a photo of the meal taken by the user, and suggests the most relevant recipes.
The project relies on several technologies:
• Eye-tracking interactive learning (data annotation and selection of relevant regions for image recognition)
• Deep learning and new bio-inspired representations (biologically inspired networks that attempt to mimic the primary areas of the visual cortex)
• Web filtering for food annotation
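As an illustration of the bio-inspired representation idea mentioned above, the sketch below (hypothetical code, not the project's actual implementation) builds a small bank of Gabor filters, the classic computational model of orientation-selective simple cells in the primary visual cortex (V1), and summarizes an image by its response energy at each orientation. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def gabor_kernel(size, theta, sigma=2.0, lam=4.0):
    """Gabor filter: a standard model of orientation-selective
    simple cells in the primary visual cortex (V1).
    size, sigma, lam are illustrative choices, not project values."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates by the preferred orientation theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))  # Gaussian window
    carrier = np.cos(2 * np.pi * xr / lam)                # oriented grating
    return envelope * carrier

def v1_features(image, n_orientations=4, size=9):
    """Summarize an image by its mean absolute response to a bank of
    oriented Gabor filters: a crude V1-like feature vector that could
    feed a downstream classifier."""
    feats = []
    h, w = image.shape
    for k in range(n_orientations):
        kern = gabor_kernel(size, theta=k * np.pi / n_orientations)
        kh, kw = kern.shape
        # Valid-mode 2D correlation via explicit sliding windows.
        responses = [
            np.sum(image[i:i + kh, j:j + kw] * kern)
            for i in range(h - kh + 1)
            for j in range(w - kw + 1)
        ]
        feats.append(np.mean(np.abs(responses)))
    return np.array(feats)

# Toy usage: a vertical-stripe pattern excites the vertically tuned
# filter (theta = 0, which varies along x) most strongly.
img = np.zeros((32, 32))
img[:, ::4] = 1.0  # vertical stripes with period 4, matching lam
f = v1_features(img)
```

In a full pipeline, such orientation-energy features (or their learned deep-network counterparts) would be the input to the food category classifier.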