Google noodle: Data scientist uses internet giant’s ML to identify ramen servings

Kenji Doi, a ramen aficionado and data scientist from Japan, has used internet giant Google’s machine learning tools to train an artificial intelligence system that identifies which shop a bowl of ramen came from, based on a photo of the bowl and its contents.

Kaz Sato, a developer advocate in Google’s Cloud division, said in a blog post that Doi used “machine learning models and AutoML Vision to classify bowls of ramen and identify the exact shop” from among 41 locations, with 95% accuracy.

While highlighting Doi’s impressive feat, Sato noted that identifying the shop from a bowl of ramen is difficult for anyone who is not deeply familiar with the noodles.


According to Sato, Doi had already built a machine learning model to classify ramen, but wanted to see if AutoML Vision could do it more efficiently. Explaining AutoML, Sato said the service can create customized ML models automatically, whether to identify animals in the wild, recognise types of products to improve an online store or, in this case, classify ramen.

“You don’t have to be a data scientist to know how to use it – all you need to do is upload well-labelled images and then click a button. In Doi’s case, he compiled a set of 48,000 photos of bowls of soup from Ramen Jiro locations, along with labels for each shop, and uploaded them to AutoML Vision. The model took about 24 hours to train, all automatically, although a less accurate, basic mode was ready in just 18 minutes. The results were impressive: Doi’s model got 94.5% accuracy on predicting the shop just from the photos.”
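Sato’s post does not walk through code, but for readers curious what querying such a model looks like, the sketch below uses the google-cloud-automl Python client (the v1beta1 API that was current around the time of this story) to send one photo to a trained AutoML Vision model. The project ID, region, model ID and image file name are hypothetical placeholders, and exact method signatures vary between releases of the client library.

```python
# Illustrative sketch only (not code from Sato's post): querying a trained
# AutoML Vision classification model with the v1beta1 Python client. The
# project, region, model ID and image file name are placeholders.
from google.cloud import automl_v1beta1 as automl

project_id = "my-gcp-project"      # placeholder GCP project
compute_region = "us-central1"     # region AutoML Vision ran in at launch
model_id = "ICN0000000000000000"   # placeholder ID of the trained model

# Fully qualified resource name of the trained model.
model_full_id = (
    f"projects/{project_id}/locations/{compute_region}/models/{model_id}"
)

prediction_client = automl.PredictionServiceClient()

# Read one ramen photo and wrap it in the request payload.
with open("ramen_bowl.jpg", "rb") as image_file:
    payload = {"image": {"image_bytes": image_file.read()}}

# Ask the model which of the labelled shops the photo most likely came from.
response = prediction_client.predict(model_full_id, payload, {})

for result in response.payload:
    print("Shop: {}  score: {:.3f}".format(
        result.display_name, result.classification.score))
```

Training itself happens in the AutoML Vision console rather than in code: upload the labelled photos, start training, and the service handles model selection and tuning before exposing the model to prediction calls like the one above.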

Google claims that AutoML Vision is designed for people without ML expertise, but that it can also speed things up dramatically for experts. “With AutoML Vision, a data scientist would not need to spend a long time training and tuning a model to achieve the best results. This means businesses could scale their AI work even with a limited number of data scientists,” Sato quoted Doi as saying.


Sato also explained how Doi’s model detects the differences between bowls of ramen. Doi’s first hypothesis was that the model was looking at the colour or shape of the bowl or the table. However, to achieve roughly 95% accuracy, the model appears to rely on much subtler cues, such as differences in the cut of the meat or the way the toppings are served.

In another example, Rainforest Connection, an organisation working to preserve rainforests, uses Google’s open-source machine learning framework TensorFlow to scale up its efforts. Topher White, founder and CEO of Rainforest Connection, said his team has built a one-of-a-kind scalable, real-time detection and alert system for logging and environmental conservation in the rainforest.

White’s team hides modified smartphones powered by solar panels, called Guardian devices, in trees across threatened areas. The devices continuously monitor the sounds of the forest and send the audio to the organisation’s cloud-based servers over the standard local cell-phone network.


“Once the audio is in the cloud, we use TensorFlow, Google’s machine learning framework, to analyse all the auditory data in real-time and listen for chainsaws, logging trucks and other sounds of illegal activity that can help us pinpoint problems in the forest.”

“Audio pours in constantly from every phone, 24 hours a day, every day, and the stakes of missed detections are high. That is why we have come to use TensorFlow, due to its ability to analyse every layer of our data-heavy detection process,” said White.

“The versatility of the machine learning framework empowers us to use a wide range of AI techniques with deep learning on one unified platform. This allows us to tweak our audio inputs and improve detection quality. Without the help of machine learning, this process would have been impossible. When fighting deforestation, every improvement can mean one more saved tree,” he added.
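Rainforest Connection has not published the code behind its detectors, but the kind of pipeline White describes, turning raw forest audio into a “chainsaw or not” decision, can be sketched in TensorFlow roughly as follows. The sample rate, window sizes, mel parameters and network shape here are illustrative assumptions, not the organisation’s actual configuration.

```python
# Illustrative sketch only, not Rainforest Connection's actual pipeline:
# a minimal TensorFlow model that turns a short audio clip into a
# log-mel spectrogram and classifies it as "chainsaw" or "background".
# Sample rate, window sizes and network shape are assumptions.
import tensorflow as tf

SAMPLE_RATE = 16000          # assumed mono sample rate, Hz
CLIP_SECONDS = 2             # length of each analysed audio window


def log_mel_spectrogram(waveform):
    """Convert a [samples] float32 waveform into a [frames, mels, 1] image."""
    stft = tf.signal.stft(waveform, frame_length=1024, frame_step=512)
    magnitude = tf.abs(stft)
    mel_matrix = tf.signal.linear_to_mel_weight_matrix(
        num_mel_bins=64,
        num_spectrogram_bins=magnitude.shape[-1],
        sample_rate=SAMPLE_RATE,
    )
    log_mel = tf.math.log(tf.matmul(magnitude, mel_matrix) + 1e-6)
    return log_mel[..., tf.newaxis]  # add a channels axis for the CNN


# Small convolutional classifier over the spectrogram "image".
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(chainsaw)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# In use, each incoming clip from a Guardian device would be scored like this
# (random noise stands in for real forest audio here):
clip = tf.random.uniform([SAMPLE_RATE * CLIP_SECONDS], -1.0, 1.0)
score = model(log_mel_spectrogram(clip)[tf.newaxis, ...])
print("chainsaw probability:", float(score))
```

Converting each clip to a log-mel spectrogram turns the audio problem into an image-classification problem, a common way to apply deep learning to sound; a production system would, of course, be trained on labelled recordings of chainsaws and forest background rather than the random stand-in clip used here.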
