
TOP MACHINE LEARNING PROJECTS LAUNCHED BY GOOGLE IN 2020 (TILL DATE)

It may be that time of the year when new year resolutions start to fizzle, but Google seems to be just getting started. The tech giant has been building tools and services to bring the benefits of artificial intelligence (AI) to its users, and it has upped its arsenal of AI-powered products with a string of new releases this month alone.

Here is a list of the top products launched by Google in January 2020.

LaserTagger

Sequence-to-sequence (seq2seq) AI models were first introduced in 2014, and their latest iterations have strengthened key text-generation tasks, including sentence formation and grammar correction. Google’s LaserTagger, which the company has open-sourced, speeds up the text generation process and reduces the chances of errors. Compared to traditional seq2seq methods, LaserTagger computes predictions up to 100 times faster, making it suitable for real-time applications. Furthermore, it can be plugged into an existing technology stack without adding any noticeable latency on the user side because of its high inference speed. These advantages become even more pronounced when applied at a large scale.
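LaserTagger's central idea is to cast text generation as a tagging problem: rather than producing the output word by word, it predicts an edit operation for each input token. The following is a minimal illustrative sketch of that idea in plain Python; the tag format and function name here are invented for illustration and are not LaserTagger's actual API:

```python
# Illustrative sketch of edit-tag-based text generation (LaserTagger-style):
# the model predicts a tag per input token -- KEEP or DELETE, optionally
# with a phrase to insert before that token -- instead of generating
# every output token from scratch.

def apply_edit_tags(tokens, tags):
    """Reconstruct output text from per-token edit tags.

    Each tag is a pair (op, added_phrase): op is "KEEP" or "DELETE",
    and added_phrase (possibly empty) is inserted before the token.
    """
    out = []
    for token, (op, added) in zip(tokens, tags):
        if added:
            out.append(added)
        if op == "KEEP":
            out.append(token)
    return " ".join(out)

# Sentence fusion example: merge two sentences into one.
tokens = ["Turing", "was", "born", "in", "1912", ".",
          "He", "died", "in", "1954", "."]
tags = [("KEEP", ""), ("KEEP", ""), ("KEEP", ""), ("KEEP", ""), ("KEEP", ""),
        ("DELETE", ""), ("DELETE", "and"), ("KEEP", ""), ("KEEP", ""),
        ("KEEP", ""), ("KEEP", "")]
print(apply_edit_tags(tokens, tags))
# -> Turing was born in 1912 and died in 1954 .
```

Because most output tokens are copied from the input, the tagger only has to choose among a small vocabulary of edits, which is one reason this style of model can run much faster than full seq2seq generation.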

Coral Accelerator Module & Coral Dev Board Mini

The company has expanded its Coral lineup by unveiling two new Coral AI products — Coral Dev Board Mini and Coral Accelerator Module. Announced ahead of the Consumer Electronics Show (CES) this year, the latest additions to the Coral family followed a successful beta run of the platform in October 2019.

Meena

Chatbots are one of the hottest trends in AI owing to their tremendous growth in applications. Google has added to the mix with Meena, a human-like, multi-turn, open-domain chatbot. Meena has been trained in an end-to-end fashion on data mined from public social media conversations, totalling more than 300GB of text. It is also massive in size — a neural network with 2.6 billion parameters — and has been trained to minimize the perplexity of the next token.
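Perplexity, the training objective mentioned above, is the exponential of the average negative log-probability the model assigns to each next token; lower is better. A minimal sketch of the definition (not Meena's actual code):

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the average negative log-probability per token.

    A model that is never surprised (probability 1 for every token) scores
    1.0; more uncertainty means a higher score.
    """
    n = len(token_log_probs)
    return math.exp(-sum(token_log_probs) / n)

# A model that assigns every token probability 0.25 has perplexity 4:
# it is, on average, as uncertain as a fair choice among 4 options.
log_probs = [math.log(0.25)] * 10
print(round(perplexity(log_probs), 6))
# -> 4.0
```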

Furthermore, Google’s human evaluation metric, called Sensibleness and Specificity Average (SSA), also captures the key elements of a human-like multi-turn conversation, making this chatbot even more versatile. In a blog post, Google claimed that Meena can ‘conduct conversations that are more sensible and specific than existing state-of-the-art chatbots.’
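SSA is the average of two percentages judged by human raters: how often a response makes sense in context (sensibleness) and how often it is also specific to that context (specificity). A sketch of how such a score could be computed from rater labels — the function name and label format here are assumptions for illustration:

```python
def ssa(labels):
    """Sensibleness and Specificity Average from human judgments.

    labels: list of (sensible, specific) boolean pairs, one per
    model response rated by a human evaluator.
    """
    n = len(labels)
    sensibleness = sum(s for s, _ in labels) / n
    specificity = sum(p for _, p in labels) / n
    return (sensibleness + specificity) / 2

# 3 of 4 responses were sensible, 2 of 4 were also specific:
ratings = [(True, True), (True, False), (False, False), (True, True)]
print(ssa(ratings))
# -> 0.625
```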

Reformer

Billed as an important development of Google’s Transformer — the novel neural network architecture for language understanding — Reformer is intended to handle context windows of up to 1 million words, all on a single AI accelerator using only 16GB of memory.

Google first mooted the idea of the new Transformer model in a 2019 research paper written in collaboration with UC Berkeley. The core idea behind this model is self-attention — the ability to attend to different positions of an input sequence to compute a representation of that sequence — which we have elaborated on in one of our earlier articles.
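As a refresher, standard self-attention lets every position in a sequence attend to every other position, which is what makes its cost grow quickly with sequence length; Reformer approximates this full attention (using locality-sensitive hashing, among other tricks) to reach very long contexts. Below is a bare-bones sketch of plain scaled dot-product self-attention, with no learned projections, purely for illustration:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(seq, d):
    """Scaled dot-product self-attention over a sequence of d-dim vectors.

    Each position scores every position (including itself), turns the
    scores into weights with softmax, and outputs a weighted average of
    the sequence. Queries, keys, and values are the input vectors
    themselves here -- real models add learned projections.
    """
    out = []
    for q in seq:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in seq]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, seq))
                    for i in range(d)])
    return out

# Two positions attending to each other; each output row is a convex
# combination of the inputs, weighted toward the more similar vector.
result = self_attention([[1.0, 0.0], [0.0, 1.0]], 2)
```

The quadratic loop over all position pairs is exactly the bottleneck Reformer targets: hashing similar vectors into the same bucket lets it attend only within buckets instead of across the whole sequence.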

Today, Reformer can process whole books concurrently on a single device, thereby exhibiting great potential.

Google has time and again reiterated its commitment to the development of AI. Seeing it as more profound than “fire or electricity”, it firmly believes that this technology can eliminate many of the constraints we face today.

The company has also delved into AI research spread across a host of sectors, whether it is detecting breast cancer or protecting whales and other endangered species.
