Here is an overview of what was unveiled during the keynote for I/O 2017.
Google Lens is essentially an updated version of Google Goggles, but it is much, much smarter. It lets you use your camera to get more information about whatever you are pointing it at. For example, if you were to point your camera at a flower and launch Google Lens, it would be able to tell you what type of flower it is. If you saw a poster advertising your favourite band’s concert, you could use Lens to see if tickets are still available, no typing required. One of my favourite features of Lens: point it at the back of your friend’s router and Google will read the network name and password for you, so you just have to tap connect.
The Cloud TPU is Google’s new processing unit designed specifically for artificial intelligence and machine learning. Each board has four chips on-board and is capable of 180 teraflops (trillion floating-point operations per second). In comparison, the Intel Core i7-6700K manages about 91.89 gigaflops. Stack 64 of these TPU boards together and they are capable of 11.5 petaflops. They are named Cloud TPUs because they are available in Google Compute Engine as of today. Google wants its cloud platform to be the best in the industry for machine learning, and these TPUs are one piece of the puzzle in achieving that goal.
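The headline numbers above line up with simple arithmetic; here is a quick back-of-the-envelope sketch (the figures are from the keynote, the variable names are my own):

```python
# Back-of-the-envelope check of the quoted Cloud TPU performance figures.

TPU_BOARD_TFLOPS = 180    # one Cloud TPU board (4 chips), in teraflops
I7_6700K_GFLOPS = 91.89   # Intel Core i7-6700K, in gigaflops, for comparison
BOARDS_PER_POD = 64       # boards stacked together in one pod

# 64 boards x 180 teraflops = 11,520 teraflops = 11.52 petaflops,
# matching the quoted ~11.5 petaflops figure.
pod_petaflops = BOARDS_PER_POD * TPU_BOARD_TFLOPS / 1000
print(f"Pod throughput: {pod_petaflops:.2f} petaflops")

# A single board works out to roughly 2,000x the i7-6700K.
ratio = (TPU_BOARD_TFLOPS * 1000) / I7_6700K_GFLOPS
print(f"One board is about {ratio:.0f}x an i7-6700K")
```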
Google.ai is a collection of Google teams aiming to bring artificial intelligence to the masses. It focuses on three areas: research, tools and infrastructure (such as the Cloud TPUs), and applied AI. Google.ai is being used to detect the spread of cancer and, in biology, to produce more accurate DNA sequencing results. It is not all so serious, however; it can also be used to detect what you are drawing. If you drew a scribble of a cat face, Google.ai would recognise this and show you a better drawing of a cat face, one that other people can recognise.