TY - JOUR
T1 - From multi-label learning to cross-domain transfer: a model-agnostic approach
AU - Read, Jesse
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2023/11/1
Y1 - 2023/11/1
AB - In multi-label learning it has been widely assumed in the literature that, to obtain best accuracy, the dependence among the labels should be explicitly modeled. This premise led to a proliferation of methods offering techniques to learn and predict labels together (joint modeling). Even though it is now acknowledged that in many contexts a model of dependence is not required for optimal performance, such models continue to outperform independent models in some of those very contexts, suggesting alternative explanations for their performance beyond label dependence. In this article we turn the original premise of multi-label learning on its head, and approach the problem of joint-modeling specifically under the absence of any measurable dependence among task labels. The insights from this study allow us to design a method for cross-domain transfer learning which, unlike most contemporary methods of this type, is model-agnostic (any base model class can be considered) and does not require any access to source data. The results we obtain have important implications and we provide clear directions for future work, both in the areas of multi-label and transfer learning.
KW - Multi-label learning
KW - Multi-task learning
KW - Transfer learning
DO - 10.1007/s10489-023-04841-9
M3 - Article
AN - SCOPUS:85166570657
SN - 0924-669X
VL - 53
SP - 25135
EP - 25153
JO - Applied Intelligence
JF - Applied Intelligence
IS - 21
ER -