15/05/2021


The Complexity of Artificial Intelligence

Artificial Intelligence, or AI, makes us look better in selfies, obediently tells us the weather when we ask Alexa for it, and powers self-driving cars. It is the technology that allows machines to learn from experience and perform human-like tasks.

As a whole, Artificial Intelligence contains many subfields, including natural language processing, computer vision, and deep learning. Most of the time, the precise technology at work is machine learning, which focuses on the development of algorithms that analyze data and make predictions, and which relies heavily on human supervision.

SMU Professor of Information Systems, Sun Qianru, likens training a small-scale Artificial Intelligence model to teaching a young child to recognize objects in his surroundings. “At first a child doesn’t understand many things around him. He might see an apple but not recognize it as an apple, and he might ask, ‘Is this a banana?’ His parents will correct him: ‘No, this is not a banana. This is an apple.’ Such feedback then signals his brain to fine-tune his knowledge.”
Professor Sun’s research focuses on deep convolutional neural networks, meta-learning, incremental learning, semi-supervised learning, and their applications in recognizing images and videos.

Training an Artificial Intelligence model

Because of the complexity of AI, Professor Sun first covers general concepts and current trends in the field before diving into her research projects.

She explains that supervised machine learning involves a model training itself on a labeled data set. That is, the data is labeled with the information the model is being built to determine, and it may even be classified in the same way the model is meant to classify the data. For example, a computer vision model designed to identify an apple might be trained on a data set of various labeled apple images.

“Give it data, and the data has labels,” she explains. “An image could contain an apple, and the image goes through the deep AI model, which makes some predictions. If the prediction is correct, then it’s fine. Otherwise, the model gets a computational loss, or penalty, to backpropagate through and modify its parameters. Then the model gets updated.”
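
Her description maps onto the standard supervised training loop. Below is a minimal sketch in PyTorch-style Python; the model, data shapes, and class count are illustrative assumptions, not details from the interview.

    import torch
    import torch.nn as nn

    # Illustrative classifier: flattens a small RGB image into 10 class scores.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    def training_step(images, labels):
        """One supervised update: predict, compare to the labels, backpropagate."""
        logits = model(images)          # the image goes through the model
        loss = loss_fn(logits, labels)  # penalty when the prediction is wrong
        optimizer.zero_grad()
        loss.backward()                 # backpropagate the loss...
        optimizer.step()                # ...and the model gets updated
        return loss.item()

    # Example batch: 8 labeled 32x32 RGB images with classes 0-9.
    images = torch.randn(8, 3, 32, 32)
    labels = torch.randint(0, 10, (8,))
    print(training_step(images, labels))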

Currently, the state-of-the-art, or best performing, Artificial Intelligence models are mostly based on deep learning, Professor Sun observes. In deep learning, the model learns to perform recognition tasks on images, text, or sound using deep neural network architectures that contain many layers. If the input is an image, for example, the idea is that the image can be described by different spatial scales, or layers, of features.

Professor Sun illustrates: “Take my face, for example. The features that distinguish me from other people are my eyes, my nose, and my mouth as local features, and my face shape and complexion as global features. For identification, I can use these features to say, ‘This is me.’” A machine model encodes such local and global features in its different layers and can thus do the same identification.
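
This layered view maps naturally onto a convolutional network: early layers respond to small local patterns, while deeper layers, whose receptive fields cover more of the image, capture global structure. Here is a minimal sketch, an illustrative architecture rather than Professor Sun’s actual model.

    import torch.nn as nn

    # Stacked convolutions: the scale of the captured features grows with depth.
    features = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),   # local: edges, corners
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid-level: parts such as eyes, nose
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1),  # global: overall face shape
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
    )
    classifier = nn.Linear(64, 2)  # e.g. "this is me" vs "someone else"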

Training AI models requires a lot of data for accurate recognition. If an Artificial Intelligence model has just one image of a person’s face, it makes mistakes recognizing that person because it doesn’t see the other facial features that distinguish that person from someone else, she argues. “Appearances have differences, and Artificial Intelligence depends on a highly diverse data set in order to learn all the differences of the image.”
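
One common way practitioners stretch a limited image set toward that kind of diversity, a general technique rather than one named in the interview, is data augmentation: generating varied views of each labeled photo. A sketch using torchvision’s standard transforms:

    from torchvision import transforms

    # Each pass over the same photo yields a slightly different view
    # (pose, crop, lighting), simulating a more diverse data set.
    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(degrees=10),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
        transforms.ToTensor(),
    ])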

Health Promotion Board app

One of the projects that Professor Sun is working on is Food AI++, an app for Singapore’s Health Promotion Board (HPB). Users are able to determine food composition data just by taking pictures of the food they’re eating with their phones. The aim of the app is to help users track the nutrition of the food they consume and use that information to achieve a healthy, well-balanced diet.

Professor Sun and her team collect the pictures that users take of their meals and upload to the app. The observation is that food images are very noisy and diverse, reflecting different cultures.
“Chinese and Malays in Singapore, for example, have different eating habits, food styles, and different categories of food,” she clarifies. “When we train a model, we start with a limited list of categories, but for the food app we found that we had to expand the categories all the time in the Application Programming Interface, or API. We have to constantly modify and update the data set. The rich cultural diversity in Singapore is one of the biggest challenges in this project.”
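
In outline, a food-recognition service of this kind classifies the photo and then looks the predicted category up in a nutrition table, which is why every new category has to be added in two places. The sketch below is purely hypothetical; the names, categories, and nutrition values are invented for illustration and are not the actual Food AI++ API.

    from typing import Dict

    # Hypothetical nutrition table keyed by predicted food category.
    NUTRITION: Dict[str, Dict[str, float]] = {
        "chicken_rice": {"calories": 607.0, "protein_g": 25.0},
        "nasi_lemak": {"calories": 644.0, "protein_g": 13.0},
    }

    def recognize(image_bytes: bytes) -> str:
        """Stand-in for the trained classifier; a real model would run here."""
        return "chicken_rice"  # placeholder prediction

    def lookup_nutrition(image_bytes: bytes) -> Dict[str, float]:
        category = recognize(image_bytes)
        # A category unknown to either the model or this table forces an
        # update to both, which is why the category list keeps growing.
        return NUTRITION.get(category, {})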

Besides collecting more diverse data, the team is also working on domain adaptation learning algorithms. Different cultures constitute different domains, so they have to think about how to quickly adapt their pre-trained models to each of them by leveraging effective learning algorithms. To do this for food images, they have to develop food-specific domain adaptation algorithms. They also need to think about including food knowledge to improve the overall efficiency of multi-domain models.
“We want to do this adaptation by using a small data set in the new domain,” Professor Sun says. “It’s a challenging task, and it would benefit Singaporean users from different cultures.”
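
Adapting a pre-trained model with only a small data set in the new domain is often approached by fine-tuning: keeping the learned feature extractor and retraining only the final layers on the new domain’s few labeled examples. The sketch below shows one common recipe under that assumption; it is not necessarily the team’s algorithm.

    import torch
    import torch.nn as nn
    from torchvision import models

    # ImageNet weights stand in for the team's pre-trained food model.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the feature extractor so the small target set cannot overfit it.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the classification head for the new domain's categories
    # (20 classes is an illustrative number).
    model.fc = nn.Linear(model.fc.in_features, 20)

    # Only the new head is updated during adaptation.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)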

FANN in AME

Professor Sun is currently in the early stages of a three-year project called “Fast-Adapted Neural Networks (FANN) for Advanced Artificial Intelligence Systems.” The project, which is funded by the Agency for Science, Technology and Research (A*STAR) under its Advanced Manufacturing and Engineering Young Individual Research Grant (AME YIRG), focuses on computer vision tasks such as image processing, image recognition, and object detection in video. Computer vision algorithms usually rely on convolutional neural networks, or CNNs, which are her area of expertise.
“The key hypothesis of the research is that it is possible to build the reasoning level of model adaptation based on statistical-level knowledge learning,” Professor Sun explains. “By validating this hypothesis, we are also approaching the goal of advanced Artificial Intelligence systems that train machine models with human-like intelligence for applications in AME domains.”
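
Fast adaptation of this kind is closely related to meta-learning, one of Professor Sun’s research areas: training a model across many small tasks so that a few gradient steps suffice on a new one. Below is a minimal Reptile-style sketch, an illustrative meta-learning recipe rather than the FANN method itself.

    import copy
    import torch
    import torch.nn as nn

    def reptile_step(model, task_batches, inner_lr=0.01, meta_lr=0.1):
        """One Reptile-style meta-update: adapt a copy of the model to a
        small task, then nudge the shared weights toward the adapted ones."""
        loss_fn = nn.CrossEntropyLoss()
        adapted = copy.deepcopy(model)
        inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
        for images, labels in task_batches:   # a handful of batches per task
            loss = loss_fn(adapted(images), labels)
            inner_opt.zero_grad()
            loss.backward()
            inner_opt.step()
        with torch.no_grad():                 # meta-update toward adapted weights
            for p, q in zip(model.parameters(), adapted.parameters()):
                p += meta_lr * (q - p)
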
The research aims to achieve high robustness and computational efficiency in automated visual inspection, and to build interdisciplinary knowledge between precision manufacturing and advanced image-recognition techniques. Professor Sun is confident the outcomes of the research will greatly improve yield rates and reduce manufacturing costs once fast-adapted inspection devices are widely installed in the design, layout, fabrication, assembly, and testing processes of production lines.