Overview
Inductive
transfer or transfer learning refers to the problem of retaining and
applying the knowledge learned in one or more tasks
to efficiently develop an effective hypothesis for a new task. While all learning
involves generalization across problem instances, transfer learning
emphasizes the transfer of knowledge across domains, tasks, and
distributions that are similar but not the same. For example,
learning to recognize chairs might help to recognize tables; or
learning to play checkers might improve the learning of chess. While
people are adept at inductive transfer, even across widely disparate
domains, there exists little associated computational learning theory
and few systems that exhibit knowledge transfer.
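To make the idea concrete, the following is a minimal, illustrative sketch of inductive transfer in a simple multi-task linear setting (the tasks, dimensions, and numbers are invented for the example and are not drawn from the workshop materials): several source tasks are solved independently, the low-dimensional structure their solutions share is extracted, and a new target task is then learned from very few examples inside that transferred structure.

```python
# Illustrative sketch only: several related source tasks share a low-dimensional
# predictive structure; recovering that structure and reusing it lets a new
# target task be learned from very few examples. All settings here are invented.
import numpy as np

rng = np.random.default_rng(0)
d, k = 50, 3                                          # input dim, shared latent dim
U_true = np.linalg.qr(rng.normal(size=(d, k)))[0]     # shared predictive subspace

def make_task(n):
    """Sample one regression task whose weight vector lies in the shared subspace."""
    w = U_true @ rng.normal(size=k)
    X = rng.normal(size=(n, d))
    y = X @ w + 0.1 * rng.normal(size=n)
    return X, y, w

# Solve 20 source tasks independently with ridge regression.
source_weights = []
for _ in range(20):
    X, y, _ = make_task(200)
    source_weights.append(np.linalg.solve(X.T @ X + np.eye(d), X.T @ y))

# Transfer step: the top-k left singular vectors of the stacked source-task
# solutions approximate the subspace the tasks share.
U_hat = np.linalg.svd(np.column_stack(source_weights))[0][:, :k]

# Target task with only 15 examples: fit it inside the transferred subspace
# versus fitting all d weights from scratch.
Xb, yb, w_b = make_task(15)
v, *_ = np.linalg.lstsq(Xb @ U_hat, yb, rcond=None)
w_transfer = U_hat @ v
w_scratch = np.linalg.solve(Xb.T @ Xb + np.eye(d), Xb.T @ yb)

print("parameter error with transfer:", np.linalg.norm(w_transfer - w_b))
print("parameter error from scratch :", np.linalg.norm(w_scratch - w_b))
```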
At NIPS95, two of the co-chairs led a successful two-day workshop
on "Learning to Learn" that focused on the need for lifelong machine
learning methods that retain and reuse learned knowledge. (The
co-organizers of that workshop were Rich Caruana, Danny Silver, Jon
Baxter, Tom
Mitchell, Lorien Pratt, and Sebastian Thrun.) The fundamental
motivation for that meeting was the recognition that machine learning
systems would benefit from manipulating knowledge learned from
related and/or prior experience and that this would enable them to
move beyond task-specific tabula rasa systems. The workshop
resulted in a series of articles published in special issues of
Connection Science [CS 1996] and Machine Learning [vol. 28, 1997], and in a
book entitled "Learning to Learn" [Pratt and Thrun 1998]. Research
in inductive transfer has continued since 1995 under a variety of
names: learning to learn, life-long learning, knowledge transfer,
transfer learning, multitask learning, knowledge consolidation,
context-sensitive learning, knowledge-based inductive bias,
meta-learning, and incremental/cumulative learning. The recent burst
of activity in this area is illustrated by research on multi-task
learning in kernel and Bayesian settings, which has established new
frameworks for capturing task relatedness to improve learning [Ando
and Zhang 04, Bakker and Heskes 03, Jebara 04, Evgeniou and Pontil 04,
Evgeniou, Micchelli and Pontil 05, Chapelle and Harchaoui 05]. This
NIPS 2005 workshop will examine
the progress that has been made in ten years, the questions and
challenges that remain, and the opportunities for new applications
of inductive transfer systems.
In particular, the workshop organizers have identified three major
goals: (1) to summarize the work thus far in the area of inductive
transfer so as to develop a taxonomy of research and to indicate open
questions; (2) to share new theories, approaches, and algorithms
regarding the accumulation and use of learned knowledge for more
effective and efficient learning; and (3) to discuss a more formal
inductive transfer community (or special interest group) that might
begin by offering a website, benchmark data and methods, shared
software, and links to various research programs and other web
resources. As an example, please see
http://birdcage.acadiau.ca:8080/ml3/.