Beyond Low-rank Decomposition: A Shortcut Approach for Efficient On-Device Learning

Research output: Contribution to journal › Conference article › peer-review

Abstract

On-device learning has emerged as a promising direction for AI development, particularly because of its potential to reduce latency, mitigate the privacy risks associated with device-server communication, and improve energy efficiency. Despite these advantages, significant memory and computational constraints remain major challenges for its deployment. Drawing on previous studies of low-rank decomposition methods that address the activation memory bottleneck in backpropagation, we propose a novel shortcut approach as an alternative. Our analysis and experiments demonstrate that our method reduces activation memory usage by up to 120.09× compared to vanilla training, while also reducing overall training FLOPs by up to 1.86× on traditional benchmarks. The code is available at https://github.com/LeTrungNguyen/ICML2025-ASI.git.
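The abstract does not detail the paper's shortcut method, but the low-rank decomposition baseline it contrasts against can be sketched. The idea behind such baselines is that a layer's full activation matrix, normally saved for backpropagation, is replaced by rank-r factors, shrinking storage from n·d to r·(n+d) values. The sketch below is illustrative only (the shapes, rank, and use of SVD are assumptions, not the paper's design):

```python
import numpy as np

# Illustrative sketch (not the paper's method): compressing a saved
# activation with a rank-r decomposition, as in the low-rank baselines
# the abstract references. Shapes and rank are assumed for illustration.
rng = np.random.default_rng(0)
n, d, r = 512, 1024, 8

act = rng.standard_normal((n, d))      # full activation saved by vanilla training
U, s, Vt = np.linalg.svd(act, full_matrices=False)
U_r = U[:, :r] * s[:r]                 # rank-r left factor, shape (n, r)
V_r = Vt[:r, :]                        # rank-r right factor, shape (r, d)

full_mem = act.size                    # n*d floats stored by vanilla training
lowrank_mem = U_r.size + V_r.size      # r*(n+d) floats stored instead
ratio = full_mem / lowrank_mem
print(f"memory reduction: {ratio:.1f}x")
```

For these assumed shapes the ratio is n·d / (r·(n+d)) ≈ 42.7×; the reduction grows as the stored rank shrinks relative to the layer dimensions, which is the trade-off both the low-rank baselines and the proposed shortcut approach navigate.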

Original language: English
Pages (from-to): 46196-46210
Number of pages: 15
Journal: Proceedings of Machine Learning Research
Volume: 267
Publication status: Published - 1 Jan 2025
Event: 42nd International Conference on Machine Learning, ICML 2025 - Vancouver, Canada
Duration: 13 Jul 2025 - 19 Jul 2025
