12 Groundbreaking Deep Learning Advances of 2017

The quest to give machines a mind of their own occupied the brightest AI specialists in 2017. Machine learning, and especially its newly hip branch, deep learning, delivered nearly all of the year’s most stunning achievements in artificial intelligence, from systems that beat us at our own games to art-producing neural networks that rival human creativity.

Both at its outset and in hindsight, experts heralded 2017 as “The Year of AI”. Here are a dozen deep learning breakthroughs from the past year that validate the claim:

What Are The Biggest Deep Learning Achievements Of 2017?

1. DeepMind’s AlphaZero clobbered the top AI champions in Go, shogi, and chess

Following its stunning win over the best human Go player in 2016, AlphaGo was upgraded a year later into a generalized and more powerful incarnation, AlphaZero. Free of any human guidance except the basic game rules, AlphaZero learned how to play master-level chess by itself in just four hours. It then proceeded to trounce Stockfish (the top AI chess player) in a 100-game match — without losing a single game.

[Figure: DeepMind AlphaZero training time]
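AlphaZero’s learning signal is remarkably simple. Below is a minimal sketch, in PyTorch, of the combined objective described in the AlphaZero paper: mean-squared error between the predicted and actual game outcome, plus cross-entropy between the network’s move distribution and the visit counts from its Monte Carlo tree search (the paper’s L2 term is typically handled by the optimizer’s weight decay). Tensor shapes and names are illustrative, not DeepMind’s code.

```python
import torch
import torch.nn.functional as F

def alphazero_loss(policy_logits, value, mcts_policy, outcome):
    """AlphaZero-style objective: (z - v)^2 - pi^T log p.

    policy_logits: raw network scores over moves, shape (batch, moves)
    value:         predicted outcome in [-1, 1], shape (batch,)
    mcts_policy:   visit-count distribution pi from tree search, shape (batch, moves)
    outcome:       actual game result z in {-1, 0, 1}, shape (batch,)
    """
    value_loss = F.mse_loss(value, outcome)
    policy_loss = -(mcts_policy * F.log_softmax(policy_logits, dim=1)).sum(dim=1).mean()
    return value_loss + policy_loss
```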

2. OpenAI’s Universe gained traction with high-profile partners 

Aimed at the field’s holy grail, a “friendly” artificial general intelligence, Universe is a free platform where developers can train an AI agent via reinforcement learning across disparate environments such as websites, applications, and games. Released in December 2016, the platform gained traction in 2017, with partners such as EA, Valve, and Microsoft Studios jumping at the chance to let Universe AI agents roam across and learn from their games.
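For a sense of what driving a Universe agent looked like, here is the canonical example adapted from OpenAI’s Universe README (the project has since been retired, so treat this as a historical sketch; it spins up a Docker-backed remote environment):

```python
import gym
import universe  # importing registers the Universe environments with gym

env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)  # launch one remote (Docker-based) environment
observation_n = env.reset()

while True:
    # A vectorized action: hold the up-arrow key in every sub-environment.
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
```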

3. Sonnet & TensorFlow Eager joined their fellow open-source frameworks

Google launched TensorFlow as an open-source machine learning library in 2015, followed by Magenta (an AI platform for creating art and music) a year later. In early 2017, Facebook AI released PyTorch, a Python-based deep learning platform supporting dynamic computation graphs, which Google matched later that year with TensorFlow Eager. Also in 2017, through its AI subsidiary DeepMind, Google released Sonnet, an open-source framework that makes it easier for developers to build neural network components.
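To see what eager execution changed, here is a minimal sketch assuming a TensorFlow 1.x installation, where eager mode was opt-in (it became the default in TensorFlow 2.0):

```python
import tensorflow as tf

tf.enable_eager_execution()  # opt-in under TF 1.x

x = tf.constant([[2.0, 0.0],
                 [0.0, 2.0]])

# Operations run immediately and return concrete values, instead of
# adding nodes to a static graph to be evaluated later in a session.
y = tf.matmul(x, x)
print(y)  # tf.Tensor([[4. 0.] [0. 4.]], shape=(2, 2), dtype=float32)
```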

4. Facebook and Microsoft joined forces to enable AI framework interoperability

The two tech giants, with the help of partner communities (including AWS, Nvidia, Qualcomm, Intel, and Huawei), developed the Open Neural Network Exchange (ONNX), an open format for representing deep learning models that lets a model be trained in one framework and transferred to another for inference.
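As a concrete sketch of that train-here, infer-there workflow, here is a minimal PyTorch-to-ONNX export; the model and input shape are illustrative:

```python
import torch
import torchvision

# Any trained torch.nn.Module will do; a randomly initialized
# ResNet-18 stands in for a trained model here.
model = torchvision.models.resnet18().eval()

# The dummy input fixes the graph's input shape. The exported file
# can then be loaded by any ONNX-compatible runtime or framework
# (e.g. Caffe2, CNTK, or MXNet in 2017) for inference.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "resnet18.onnx")
```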

5. Unity enabled developers to easily build intelligent agents in games

One of the planet’s leading game development companies built ML-Agents, a platform for AI developers and researchers to leverage Unity simulations and games as customizable environments where they can train intelligent agents using evolutionary strategies, deep reinforcement learning, and other training methods.

6. Machine Learning as a Service (MLaaS) platforms sprouted up everywhere

In 2017, every tech giant made a big splash about their Machine Learning as a Service (MLaaS) platforms, intended to “democratize AI” by enabling companies less blessed with technical talent to partake in the latest breakthroughs via API. Google’s prediction services were rebranded under Google Cloud AI; Amazon expanded awareness of and access to voice and NLP platforms like Lex, Polly, and the Alexa Skills Kit; and Microsoft and IBM touted their own wares just as loudly.

More and more enterprises also joined the race to build in-house machine learning platforms and centers of deep learning excellence: Uber has Michelangelo, Facebook has FBLearner Flow, and Twitter has Cortex. Capital One and other forward-thinking companies outside the core tech space have set up their own centers of machine learning excellence as well.

7. The GAN Zoo Continued To Grow

In January 2017, a team of AI researchers published a pivotal paper on the Wasserstein GAN (WGAN), a material improvement to the training of traditional generative adversarial networks (GANs). WGAN improved training stability, mitigated mode collapse, and made debugging easier. In turn, a slew of new GANs, ranging from BEGAN to CycleGAN to Progressive GAN, flourished. This last approach of progressively growing GANs enabled Nvidia to generate high-resolution facial photos of fake celebrities.
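To make the WGAN recipe concrete, here is a minimal sketch of one critic update following the original paper’s algorithm: maximize the critic’s score gap between real and generated samples, then clip the weights to keep the critic roughly 1-Lipschitz. The `critic` and `generator` modules and the optimizer are assumed to exist, and in practice the critic takes several such steps per generator update.

```python
import torch

def critic_step(critic, generator, real, opt_critic, clip=0.01, z_dim=100):
    """One WGAN critic update (Arjovsky et al., 2017)."""
    z = torch.randn(real.size(0), z_dim)
    fake = generator(z).detach()          # don't backprop into the generator here
    # Minimizing D(fake) - D(real) maximizes the real/fake score gap.
    loss = critic(fake).mean() - critic(real).mean()
    opt_critic.zero_grad()
    loss.backward()
    opt_critic.step()
    for p in critic.parameters():         # weight clipping, per the paper
        p.data.clamp_(-clip, clip)
    return loss.item()
```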

8. Who Needs Recurrence Or Convolution When You Have Attention?   

Natural language processing tasks such as speech recognition and machine translation have historically been tackled with neural network architectures built around memory components, such as LSTMs. A breakthrough paper, “Attention Is All You Need”, proposed a new model, the Transformer, which dispenses with computationally expensive components like recurrence and convolution yet achieves state-of-the-art performance on machine translation, at least for English-to-German and English-to-French. While more research is needed to see whether the Transformer architecture holds up across use cases, the paper generated tons of buzz in the community and is still ranked as the 4th most popular paper of all time on Arxiv Sanity.
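The Transformer’s core operation, scaled dot-product attention, is compact enough to sketch directly from the paper’s formula, softmax(QKᵀ / √d_k)V:

```python
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V  (Vaswani et al., 2017).

    q, k, v: tensors of shape (..., seq_len, d_k); mask zeros mark
    positions that should not be attended to.
    """
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # pairwise similarities
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float('-inf'))
    weights = F.softmax(scores, dim=-1)  # attention weights over key positions
    return weights @ v
```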

9. AutoML Simplified The Lives Of Data Scientists & Machine Learning Engineers

What makes machine learning “hard”? Many practitioners agree that the data munging and iterative debugging required to get performant models create a significant barrier to entry. Enter AutoML: platforms that automate major chunks of the machine learning pipeline, from data cleaning and preparation, to model parameter search and optimization, to deployment and scaling. Notable solutions include Google’s AutoML (in alpha), Amazon’s SageMaker, DataRobot, RapidMiner, H2O.ai’s Driverless AI, and open-source Python solutions like TPOT.
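As a taste of the AutoML workflow, here is a minimal sketch using the open-source TPOT library on a toy scikit-learn dataset (the hyperparameters are illustrative, and a real search takes much longer):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier  # pip install tpot

X_train, X_test, y_train, y_test = train_test_split(
    *load_digits(return_X_y=True), test_size=0.25, random_state=42)

# TPOT uses genetic programming to evolve whole scikit-learn pipelines:
# preprocessing steps, model choice, and hyperparameters.
tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('best_pipeline.py')  # emit the winning pipeline as plain Python
```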

10. Hinton Declared Backprop Dead, Finally Dropped His Capsule Networks

Backpropagation serves as the backbone of nearly every notable achievement in deep neural networks, but leading deep learning pioneer Geoffrey Hinton warned that the technique was unlikely to get us to AGI, or artificial general intelligence. His statements sent shockwaves through the industry and inspired many debates.

In the meantime, Hinton had also been hinting at a new network architecture, capsule networks, which he finally released in 2017 after much anticipation. Capsule networks aim to overcome key limitations of convolutional neural networks, which can be easily tricked both by conceptually confusing images and by intentional adversarial attacks.
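At the heart of the capsule architecture is the “squashing” nonlinearity from the paper, which rescales a capsule’s output vector so its length lands in [0, 1) and can be read as the probability that the entity the capsule represents is present. A minimal sketch:

```python
import torch

def squash(s, dim=-1, eps=1e-8):
    """Capsule squashing function (Sabour, Frosst & Hinton, 2017):
    v = (|s|^2 / (1 + |s|^2)) * (s / |s|).
    Short vectors shrink toward zero; long vectors approach unit length.
    """
    sq_norm = (s * s).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)
```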

11. Quantum & Optical Computing Entered The AI Hardware Wars

Faster hardware means better AI. Google announced the second generation of its Tensor Processing Unit (TPU), designed specifically for AI workloads, but technically innovative alternatives like quantum computing and optical computing are also entering the market. IBM and Google both announced milestones in their quantum computing programs, while researchers and engineers showed that the matrix operations used in deep learning can be done in parallel by switching from an electrical computing paradigm to a photonic one.

12. Ethics & Fairness Of ML Systems Took Center Stage

At TOPBOTS, we’ve written extensively about the importance of expanding AI education globally and eliminating biased algorithms. This monumental task requires the cooperation of all AI professionals along with diverse communities around the world to ensure that our technology respects all of humanity, not just an elite few.

With the alarming rate of technological progress threatening to eclipse our own understanding of our creations, a number of leading technologists have sounded the alarm and taken action to ensure benevolent AI. Cathy O’Neil, author of Weapons of Math Destruction, called out the need to drop our blind faith in big data. Fei-Fei Li, Stanford professor and Chief Scientist of Google Cloud AI/ML, expanded AI4ALL, an educational non-profit training the next generation of AI leaders. Kate Crawford and Meredith Whittaker started AI Now, an interdisciplinary research organization dedicated to studying the social implications of AI. In her excellent keynote at NIPS 2017, Crawford highlighted the many challenges facing machine learning systems today and rallied the community to prioritize ethics, fairness, and safety.

So I think we can agree that 2017 can appropriately be named the “Year of AI”. The question is, what do we call the year 2018, which promises to deliver even more AI marvels than the one before? 

