Inductive Transfer: 10 Years Later
NIPS 2005 Workshop
Organizers: Danny Silver, Goekhan Bakir, Kristin Bennett, Rich Caruana, Massimiliano Pontil, Stuart Russell, Prasad Tadepalli


Call for Participation

We invite submissions of workshop papers that discuss ongoing or completed work on inductive transfer (see below for a list of appropriate topics). Papers should be no more than four pages in the standard NIPS format (style files and examples). Authorship should not be blind. Please submit a paper by emailing it in Postscript or PDF format to danny.silver@acadiau.ca with the subject line "ITWS Submission". We anticipate accepting as many as 8 papers for 15-minute presentation slots and up to 20 poster papers. Please submit a paper only if at least one of the authors will be able to attend the workshop and present the work.

The 1995 workshop identified the most important areas for future research to be:

  • The relationship between computational learning theory and selective inductive bias;

  • The tradeoffs between storing or transferring knowledge in representational and functional form;

  • Methods for turning concurrent parallel learning into sequential lifelong learning;

  • Measuring relatedness between learning tasks for the purposes of knowledge transfer;

  • Long-term memory methods and cumulative learning; and

  • The practical applications of inductive transfer and lifelong learning systems.

The workshop is interested in the progress that has been made in these areas over the last ten years, and these remain key topics for discussion. Other important, forward-looking questions include:

  • Under what conditions is inductive transfer difficult? When is it easy?

  • What are the fundamental requirements for continual learning and transfer?

  • What new mathematical models/frameworks capture/demonstrate transfer learning?

  • What are some of the latest and most advanced demonstrations of transfer learning in machines (Bayesian methods, kernel methods, reinforcement learning)?

  • What can be learned from transfer learning in humans and animals?

  • What are the latest psychological/neurological/computational theories of knowledge transfer in learning?

Research in inductive transfer has continued since 1995 under a variety of names: learning to learn, lifelong learning, knowledge transfer, transfer learning, multitask learning, knowledge consolidation, context-sensitive learning, knowledge-based inductive bias, meta-learning, continual learning, and incremental/cumulative learning. Papers in any of the above areas are welcome. The reviewers will give the highest merit to new theories, approaches, and algorithms regarding the accumulation and use of learned knowledge for the purpose of more effective and efficient learning.

From a research community and program perspective, there are important questions that should be addressed, such as:

  • Should we establish a repository and methodology for testing and benchmarking inductive transfer?

  • Is there a need for a special interest group on Inductive Transfer?

Last updated 12/01/2005 - Danny Silver