
The Future of AI: Meet the Multi-Tasking Machines

Blog: Enterprise Decision Management Blog


To commemorate the silver jubilee of FICO’s use of artificial intelligence and machine learning, we asked FICO employees a question: What does the future of AI look like? The post below is one of the thought-provoking responses, from Chahm An, a lead analytic scientist at FICO, working in San Diego.

Artificial intelligence, like human intelligence, uses observations from the past to learn a general model that can be used to make predictions about similar occurrences in the future. The future I see for AI is based on current work in the field, and grounded in what I saw 20 years ago, when I first began to study AI.

What’s Really Accelerating AI?

In 1997, IBM’s Deep Blue had just defeated reigning world champion Garry Kasparov at the game of chess, in what was seen as a landmark event: artificial intelligence surpassing a human champion at an intellectual challenge. Automated speech recognition systems were beginning to replace touchtone menus in commercial applications. The Human Genome Project was well underway, and dot-coms were popping up like weeds.

Conventional wisdom on the state of AI at the time held that while some tasks that appear easy for humans, such as driving a car, were extremely difficult for computers to learn, other tasks that are difficult for humans, such as playing chess at a grandmaster level, could be achieved by brute-force branching computation.

These brute-force optimizations were not feasible for problems with many more possible outcomes, as found in more complex games such as Go, or in tasks such as computer vision or natural language processing. Speech applications were thus confined to menu interfaces with a restricted set of choices, and optical character recognition was similarly limited in scope.

The key development most would credit with overcoming this challenge is neural network technology, which has allowed us to train models that make complex decisions at a higher, more abstract level, similar to the way the brain functions. However, this isn’t the complete story: as we know, neural networks have been in use at FICO for 25 years, so they can hardly be considered a new development.

More accurately, what has changed is that the cost of training neural networks has decreased dramatically, making complex networks feasible to build and further accelerating the development of neural network technologies, which in turn become more efficient and accurate. Geoffrey Hinton, a leader in the field of machine learning, has humorously noted that training deep belief networks became feasible thanks to a roughly 100,000-fold speedup: partly attributable to new, more efficient algorithms, but mostly to the fact that computers had become about 1,000 times faster in the 15 years since the basic building blocks were developed.

Although Moore’s law appears to be slowing down, I predict that as more focus goes toward hardware specialized for machine learning, and if research into more efficient training algorithms continues on its current trend, the cost of performing machine learning tasks in 2027 will be roughly 0.1% of what it is today. This means that models that would take years to train with today’s technology would take less than a day, and learning tasks that currently require resources available only to supercomputers will be feasible on everyday consumer mobile devices.
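As a rough sanity check on that figure, here is a back-of-the-envelope sketch in Python, under the simplifying assumption (mine, not a sourced number) that hardware and algorithmic gains combine to halve the effective cost of machine learning every year:

# Back-of-the-envelope check of the 1,000x (0.1%) extrapolation.
# ASSUMPTION: combined hardware + algorithm gains halve cost yearly.
years = 10                # roughly 2017 to 2027
factor = 2 ** years       # compound improvement over the decade
print(factor)             # 1024, i.e., cost falls to ~0.1% of today
print(3 * 365 / factor)   # a 3-year training job shrinks to ~1 day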

How will this ability manifest in our lives? Here are three predictions.

Prediction 1: Dynamic Learners – Everywhere, All the Time

Despite the hype and fanfare surrounding deep learning, most of these advanced neural network architectures are stuck in a box. The latest deep convolutional nets can identify obscure breeds of dogs better than the average human, but they are highly optimized for strictly defined snapshot inputs and limited to a pre-defined set of classifications.

With greater availability of computational resources and data, I believe the trend will move from deep architectures with a single snapshot input and a single classification output to much more complex deep recurrent networks that take in multiple streams of varying input and offer many types of output. Instead of static image classification, a computer vision system may work with two continuous streams of binocular video, similar to what we process as humans. Not only will it identify that a dog is a beagle, but also that the dog is taking a walk, and that the lady taking it on a walk looks kind of like Meryl Streep.
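To make the idea concrete, here is a minimal sketch in Python, assuming the PyTorch library; the architecture, names and dimensions below are all hypothetical illustrations, not a real system:

import torch
import torch.nn as nn

class MultiStreamNet(nn.Module):
    """Hypothetical net: two recurrent input streams, multiple output heads."""
    def __init__(self, feat_dim=64, hidden_dim=128):
        super().__init__()
        # One recurrent encoder per video stream (e.g., left/right eye)
        self.left_rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.right_rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Multiple output types from the same fused representation
        self.breed_head = nn.Linear(2 * hidden_dim, 120)    # e.g., dog breeds
        self.activity_head = nn.Linear(2 * hidden_dim, 10)  # e.g., "taking a walk"

    def forward(self, left_frames, right_frames):
        # Each input: (batch, time, feat_dim); keep each stream's final hidden state
        _, h_left = self.left_rnn(left_frames)
        _, h_right = self.right_rnn(right_frames)
        fused = torch.cat([h_left[-1], h_right[-1]], dim=-1)
        return self.breed_head(fused), self.activity_head(fused)

net = MultiStreamNet()
left = torch.randn(1, 30, 64)    # 30 timesteps of features per stream
right = torch.randn(1, 30, 64)
breed_logits, activity_logits = net(left, right)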

Since we’ve got supercomputers full of detectors in our pockets, this continuously streaming process will also be learning on the job. While learning today tends to be a large, computationally expensive batch job performed on a server, it is likely to shift to end devices to some extent, both to distribute costs and to adapt each device to its unique environment. My phone might ask me whether I’m interested in the price of dog leashes, or whether I think the Meryl Streep look-alike is cute, and continuously build a profile of my preferences.
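A small sketch of that learn-on-the-job pattern, using scikit-learn’s incremental partial_fit interface; the streaming data and labels here are stand-ins, not a real device workload:

import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")   # logistic regression trained by SGD
classes = np.array([0, 1])               # e.g., "interested" vs. "not interested"

rng = np.random.default_rng(0)
for _ in range(100):                     # mini-batches arriving over time
    X_batch = rng.normal(size=(8, 5))    # 8 new observations, 5 features each
    y_batch = (X_batch.sum(axis=1) > 0).astype(int)  # stand-in labels
    # Update the existing model in place instead of retraining from scratch
    model.partial_fit(X_batch, y_batch, classes=classes)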

Prediction 2: Generalized Learners – Not Just One-Trick Ponies

I believe that there will be a significant trend towards intelligent systems that do well at more than a single task. The same neural network that is designed to filter out spam emails may also reuse its knowledge to detect phishing attempts, prioritize your inbox by importance, and help you draft responses to common requests. Again, this follows the trend of multiple inputs and multiple outputs, but the result will be more robust systems capable of understanding abstract concepts the way that we do.
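As an illustration (again assuming PyTorch, with hypothetical names and sizes), the knowledge-sharing could look like one shared encoder feeding several task-specific heads:

import torch
import torch.nn as nn

class EmailAssistant(nn.Module):
    def __init__(self, vocab_size=10000, hidden_dim=128):
        super().__init__()
        # Knowledge lives in the shared encoder...
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden_dim), nn.ReLU())
        # ...and is reused by every task-specific head
        self.spam_head = nn.Linear(hidden_dim, 2)      # spam vs. legitimate
        self.phishing_head = nn.Linear(hidden_dim, 2)  # phishing vs. benign
        self.priority_head = nn.Linear(hidden_dim, 5)  # importance levels

    def forward(self, bag_of_words):
        shared = self.encoder(bag_of_words)
        return (self.spam_head(shared),
                self.phishing_head(shared),
                self.priority_head(shared))

model = EmailAssistant()
email = torch.rand(1, 10000)   # toy bag-of-words vector for one email
spam, phishing, priority = model(email)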

To achieve this robustness, we will probably see multiple types of learners interacting with each other: perhaps a self-contained Bayesian network processing text in an unsupervised fashion to provide feedback to both a recurrent neural network and a random forest classifier, which together form a consensus opinion, all within a reinforcement learning system. With a glut of unlabeled data to work with, we are also likely to see more unsupervised learning of generative models that understand the underlying distribution of the variables of interest, rather than the discriminative models currently popular in supervised learning.
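For a concrete, if much simpler, flavor of heterogeneous learners forming a consensus, here is a sketch using scikit-learn’s VotingClassifier over three different model families (the data is synthetic):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Three different model families vote on every prediction
consensus = VotingClassifier(
    estimators=[
        ("bayes", GaussianNB()),               # probabilistic learner
        ("forest", RandomForestClassifier()),  # tree ensemble
        ("linear", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",  # average the predicted class probabilities
)
consensus.fit(X, y)
print(consensus.predict(X[:5]))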

The result of these generalized learners will be intelligent systems that are not just optimized for some pre-defined objective, but actually have some degree of expertise in an area of knowledge. Instead of a binary decision maker, we may get systems that can explain their judgments in terms that are easy to understand and that can adapt to more customized objectives, giving us greater confidence in them.

Prediction 3: Mixed Reception, But Gradual Trust

As seen with the issues surrounding self-driving cars, there is likely to be a great deal of resistance to certain advances in AI systems. A computer that can defeat Lee Sedol at a game of Go seems innocuous enough, but are we ready to trust an artificial intelligence with deadly force? Do we trust smart devices and their manufacturers to listen in on every moment of our daily lives? Are we afraid that computers will become better at our jobs than we are and render us unemployed?

I believe that this apprehension will be the biggest challenge to the advancement of AI in the next decade. Legislation will likely be put in place to protect the privacy, job security and safety of the very consumers who stand to benefit most from future advances in AI.

However, progress appears to be inevitable, as we already count on technology for so much nowadays. Who uses a physical map to navigate anymore, for example?

Perhaps artificial intelligence systems will need to learn to become experts at PR and marketing before moving on to the next stage of adoption.

Making Progress Today

As I was writing this blog post, I realized that FICO already does much of what I have predicted will become mainstream for artificial intelligence. We are already great at learning continuous customer profiles over time, adapting models automatically, and providing reasons along with scores. This speaks to the vision that put FICO ahead of the game 25 years ago, and it will continue to keep us ahead in the future of AI.

See other FICO posts on artificial intelligence.

