Lifelong Learning Will Power Next Generation Of Autonomous Devices




Applications as diverse as delivery drones, self-driving cars, industrial robots and extraplanetary rovers will depend on this emerging field

Developers must overcome strict limits to size, power and model flexibility to enable on-device learning in real time, but new research and new design guidelines may help.

Look up “lifelong learning” online, and you’ll find a laundry list of apps to teach you how to quilt, play chess or even speak a new language.

Within the emerging fields of artificial intelligence (AI) and autonomous devices, however, “lifelong learning” means something different — and it is a bit more complex. It refers to the ability of a device to continuously operate, interact with and learn from its environment — on its own and in real time.

This ability is critical to the development of some of our most promising technologies — from automated delivery drones and self-driving cars, to extraplanetary rovers and robots capable of doing work too dangerous for humans.

“To create devices that can truly learn in real-time, we will need breakthroughs spanning algorithm design, chip design and novel materials and devices. It’s an extremely exciting time for the entire lifelong learning ecosystem.” —Angel Yanguas-Gil, principal materials scientist at Argonne

In all these instances, scientists are developing algorithms at a breakneck pace to enable such learning. But the specialized hardware that devices need to run these new algorithms, known as AI accelerators or chips, must keep up.

That’s the challenge that Angel Yanguas-Gil, a researcher at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, has taken up. His work is part of Argonne’s Microelectronics Initiative and is funded by Argonne’s Laboratory Directed Research and Development program. Yanguas-Gil and a multidisciplinary team of colleagues recently published a paper in Nature Electronics that explores the programming and hardware challenges that AI-driven devices face, and how we might be able to overcome them through design.

Learning in real time

Current approaches to AI are based on a training and inference model. The developer “trains” the AI capability offline to use only certain types of information to perform a defined set of tasks, tests its performance and then installs it onto the destination device.

“At that point, the device can no longer learn from new data or experiences,” explains Yanguas-Gil. “If the developer wants to add capabilities to the device or improve its performance, he or she must take the device out of service and train the system from scratch.”

For complex applications, this model simply isn’t feasible.
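
To make the train-then-deploy pattern described above concrete, here is a minimal, hedged sketch in plain Python with NumPy. The toy data, the tiny linear classifier and the update rule are all invented for illustration and are not from the Argonne work; the sketch only shows the key point that once the model is deployed, its weights are frozen, and nothing in the inference loop can adapt them to new situations.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Offline training phase (happens before deployment) ---
# Toy two-class data: points above or below the line y = x.
X_train = rng.normal(size=(200, 2))
y_train = (X_train[:, 1] > X_train[:, 0]).astype(float)

w = np.zeros(2)   # illustrative linear model, not from the paper
b = 0.0
lr = 0.1
for _ in range(100):                       # simple perceptron-style updates
    preds = (X_train @ w + b > 0).astype(float)
    err = y_train - preds
    w += lr * (err @ X_train) / len(X_train)
    b += lr * err.mean()

# --- Deployment phase: weights are frozen ---
def infer(x):
    """On-device inference only; w and b never change here."""
    return int(x @ w + b > 0)

print(infer(np.array([0.0, 1.0])))  # a point well above the line: expect class 1
# If the environment changes (new object classes, new terrain), nothing in
# this loop can update w or b without taking the model back offline.
```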

“Think of a planetary rover that encounters an object it wasn’t trained to recognize, or enters terrain it was not trained to navigate,” Yanguas-Gil continues. “Given the time lag between the rover and its operators, shutting it down and trying to retrain it to perform in this situation won’t work. Instead, the rover must be able to collect the new types of data, relate that new information to the information it already has and the tasks associated with it, and then make decisions about what to do next in real time.”

The challenge is that real-time learning requires significantly more complex algorithms. In turn, these algorithms require more energy, more memory and more flexibility from their hardware accelerators to run. And these chips are nearly always strictly limited in size, weight and power — depending on the device.

Keys for lifelong learning accelerators

According to the paper, AI accelerators need a number of capabilities to enable their host devices to learn continuously.

The learning capability must be located on the device. In most intended applications, there won’t be time for the device to retrieve information from a remote source such as the cloud, or to wait for instructions transmitted by an operator, before it needs to perform a task.

The accelerator must also be able to change how it uses its resources over time in order to make the most efficient use of its energy and space. This could mean deciding to change where it stores certain types of data, or how much energy it uses to perform certain tasks.

Another necessity is what researchers call “model recoverability.” This means that the system can retain enough of its original structure to keep performing its intended tasks at a high level, even though it is constantly changing and evolving as a result of its learning. The system should also prevent what experts refer to as “catastrophic forgetting,” where learning new tasks causes the system to forget older ones. This is a common occurrence in current machine learning approaches. If necessary, systems should be able to revert to more successful practices if performance begins to suffer.

Finally, the accelerator may also need to consolidate knowledge gained from previous tasks (using data from past experiences through a process known as replay) while it is actively completing new ones.
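
To make “replay” a bit more concrete, the following is a minimal, hedged sketch in plain Python. The class and function names are my own and do not come from the paper or from any accelerator toolchain; a real lifelong-learning accelerator would have to implement something like this within a very tight on-chip memory and energy budget. The idea is simply a bounded store of past examples that gets mixed into each new training batch, so the system keeps rehearsing earlier tasks while it learns new ones.

```python
import random

class ReplayBuffer:
    """Bounded store of past (input, label) pairs for rehearsal.

    Illustrative only: a hardware implementation would live in limited
    on-chip memory rather than a Python list.
    """

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0  # total examples offered, used for reservoir sampling

    def add(self, example):
        """Keep a uniform random sample of everything seen so far."""
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        """Draw up to k stored examples to mix into the current batch."""
        return random.sample(self.buffer, min(k, len(self.buffer)))


def training_step(update_model, new_batch, buffer, replay_ratio=0.5):
    """One continual-learning step: new data plus rehearsed old data."""
    rehearsed = buffer.sample(int(len(new_batch) * replay_ratio))
    update_model(new_batch + rehearsed)   # caller supplies the actual update
    for example in new_batch:
        buffer.add(example)
```

The obvious trade-off is memory: the buffer competes for the same strictly limited on-device storage the article describes, which is one reason replay is a hardware question and not just an algorithmic one.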

All these capabilities present challenges for AI accelerators that researchers are only starting to take up.

How do we know it’s working?

The process for measuring the effectiveness of AI accelerators is also a work in progress. In the past, assessments have focused on task accuracy to measure the amount of “forgetting” that occurs in the system as it learns a series of tasks.

But these measures are not nuanced enough to capture the information that developers need to design AI chips that can meet all the requirements of lifelong learning. According to the paper, developers are now more interested in assessing how well a device can use what it learns to improve its performance on tasks that come both before and after the point in a sequence where it learns new information. Other emerging metrics aim to measure how fast the model can learn and how well it manages its own growth.
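
One common way the continual-learning literature formalizes these ideas (general background, not a metric defined in the Argonne paper) is an accuracy matrix R, where R[i][j] is the accuracy on task j measured after the model finishes training on task i. From R one can compute the average accuracy after the last task, backward transfer (how much later learning changed performance on earlier tasks, with negative values indicating forgetting) and forward transfer (how much earlier learning helps a task before it is trained on). The sketch below shows the arithmetic on made-up numbers.

```python
import numpy as np

# R[i, j] = accuracy on task j after finishing training on task i.
# The values are invented purely to show the arithmetic.
R = np.array([
    [0.90, 0.10, 0.12],   # after task 0
    [0.70, 0.88, 0.15],   # after task 1: task-0 accuracy has slipped
    [0.60, 0.80, 0.85],   # after task 2
])
T = R.shape[0]
baseline = np.array([0.10, 0.11, 0.13])   # accuracy of an untrained model

# Average accuracy over all tasks once the final task is learned.
avg_acc = R[-1].mean()

# Backward transfer: effect of later learning on earlier tasks
# (negative values mean forgetting).
bwt = np.mean([R[-1, j] - R[j, j] for j in range(T - 1)])

# Forward transfer: how much earlier learning helps a task that has
# not been trained on yet, relative to the untrained baseline.
fwt = np.mean([R[j - 1, j] - baseline[j] for j in range(1, T)])

print(f"average accuracy:  {avg_acc:.2f}")
print(f"backward transfer: {bwt:.2f}")
print(f"forward transfer:  {fwt:.2f}")
```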

Progress in the face of complexity

If all of this sounds exceptionally complex, well, it is.

“It turns out that in order to create devices that can truly learn in real-time, we will need breakthroughs and strategies spanning from algorithm design to chip design to novel materials and devices,” says Yanguas-Gil.

Fortunately, researchers might be able to draw on or adapt existing technologies originally conceived for other applications, such as memory devices. This could help realize lifelong learning capabilities in a way that is compatible with current semiconductor processing technologies.

Similarly, novel co-design approaches that are being developed as part of Argonne’s research portfolio in microelectronics can help accelerate the development of novel materials, devices, circuits and architectures optimized for lifelong learning. In their Nature Electronics paper, Yanguas-Gil and his colleagues provide some design principles to guide development efforts along these lines. They include:

  • Highly reconfigurable architectures, so that the model can change how it uses energy and stores information as it learns — similar to how the human brain works.
  • High data bandwidth (for rapid learning) and a large memory footprint.
  • On-chip communication to promote reliability and availability.

“The process of tackling these challenges is just getting started in a number of scientific disciplines. And it will likely require some very close collaboration across those disciplines, as well as an openness to new designs and new materials,” explains Yanguas-Gil. “It’s an extremely exciting time for the entire lifelong learning ecosystem.”

Part of this material is based on research sponsored by the Air Force Research Laboratory. In addition to Yanguas-Gil, authors contributing to this research include Dhireesha Kudithipudi, Anurag Daram, Abdullah M. Zyarah, Fatima Tuz Zohora, James B. Aimone, Nicholas Soures, Emre Neftci, Matthew Mattina, Vincenzo Lomonaco, Clare D. Thiem and Benjamin Epstein.


Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

Courtesy of Argonne National Laboratory. By Michael Kooi

