Sep 22, 2023 · Knowledge distillation transfers knowledge from the teacher model to the student model, aiming to improve the student's performance.
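For concreteness, here is a minimal sketch of the standard response-based distillation loss these snippets refer to, assuming temperature-scaled soft targets blended with hard-label cross-entropy; the temperature and weighting below are illustrative defaults, not values taken from any of the listed papers.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Response-based distillation: temperature-softened KL term plus hard-label CE."""
    # Soft targets from the teacher, softened by temperature T.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so the soft term's gradients match the hard term's magnitude
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```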
Jan 3, 2024 · We propose TempDistiller, a Temporal knowledge Distiller, to acquire long-term memory from a teacher detector when provided with a limited number of frames.
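The TempDistiller snippet only states the goal: distill long-term temporal information from a teacher that sees many frames into a student given few. The sketch below is a generic temporal feature-distillation setup under that assumption; the encoders, mean pooling, and MSE loss are placeholder choices, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

def temporal_distill_loss(teacher_encoder, student_encoder, frames, k=2):
    """Distill a temporally aggregated teacher feature into a student that only
    sees the last k frames. Assumes both encoders map a clip (B, T, C, H, W)
    to per-frame features of shape (B, T, D)."""
    with torch.no_grad():
        # Teacher aggregates the full clip into a long-term memory feature.
        t_feat = teacher_encoder(frames).mean(dim=1)          # (B, D)
    # Student only receives a short window of the most recent frames.
    s_feat = student_encoder(frames[:, -k:]).mean(dim=1)      # (B, D)
    return F.mse_loss(s_feat, t_feat)
```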
Sep 6, 2023 · In this study, we devise a Dual Masked Knowledge Distillation (DMKD) framework which can capture both spatially important and channel-wise informative clues.
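The DMKD snippet names dual masking over the spatial and channel dimensions without giving the formulation. A rough sketch follows, assuming teacher-attention-guided top-k masks and a masked L2 regression onto the teacher feature; these choices are assumptions for illustration, not the paper's method.

```python
import torch
import torch.nn.functional as F

def dual_masked_distill_loss(s_feat, t_feat, spatial_ratio=0.5, channel_ratio=0.5):
    """Mask the student feature both spatially and channel-wise, keeping the
    positions and channels the teacher activates most, then regress to the teacher.
    s_feat, t_feat: (B, C, H, W) feature maps of matching shape."""
    B, C, H, W = t_feat.shape
    # Spatial attention of the teacher: mean absolute activation over channels.
    spatial_att = t_feat.abs().mean(dim=1)                                   # (B, H, W)
    k_sp = int(spatial_ratio * H * W)
    thresh_sp = spatial_att.flatten(1).kthvalue(H * W - k_sp + 1, dim=1).values
    spatial_mask = (spatial_att >= thresh_sp.view(B, 1, 1)).float().unsqueeze(1)
    # Channel attention of the teacher: mean absolute activation over space.
    channel_att = t_feat.abs().mean(dim=(2, 3))                              # (B, C)
    k_ch = int(channel_ratio * C)
    thresh_ch = channel_att.kthvalue(C - k_ch + 1, dim=1).values
    channel_mask = (channel_att >= thresh_ch.view(B, 1)).float().view(B, C, 1, 1)
    # Distill only on the jointly selected positions and channels.
    mask = spatial_mask * channel_mask
    return F.mse_loss(s_feat * mask, t_feat * mask)
```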
Mar 11, 2024 · The mask-guided distillation is designed to emphasise the student's learning of close-to-object features via multi-value masks, while relation-based ...
This paper proposes a technique called Masked Knowledge Distillation (MKD) that enhances this process using a masked autoencoding scheme. In MKD, random patches ...
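The MKD snippet is cut off after "random patches", so the sketch below is only a hedged illustration of a masked-autoencoding-style distillation step: random student patch tokens are replaced by a learnable mask token, and a light decoder reconstructs the teacher's tokens at the masked positions. The decoder and mask token are illustrative components, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedTokenDistiller(nn.Module):
    """Randomly drop student patch tokens, replace them with a learnable mask
    token, and train a light decoder to reconstruct the teacher's tokens."""
    def __init__(self, dim, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.decoder = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, s_tokens, t_tokens):
        # s_tokens, t_tokens: (B, N, D) patch tokens from student and teacher.
        B, N, D = s_tokens.shape
        mask = torch.rand(B, N, device=s_tokens.device) < self.mask_ratio  # True = masked
        mixed = torch.where(mask.unsqueeze(-1), self.mask_token.expand(B, N, D), s_tokens)
        recon = self.decoder(mixed)
        # Reconstruction loss only on the masked positions.
        return F.mse_loss(recon[mask], t_tokens.detach()[mask])
```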
In this paper, we conduct an analysis of the properties of different feature layers in ViT to identify a method for feature-based ViT distillation.
Current mainstream feature masking distillation methods work by reconstructing selectively masked regions of a student network ...
Different Generative Block for Deep Distillation. For generation, we randomly mask the student's tokens and utilize a generative block to restore the feature.
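This last snippet gives the clearest mechanism: mask the student's tokens at random, restore them with a generative block, and match the restored features to the teacher. A minimal sketch under that reading follows, using a single transformer encoder layer as a stand-in generative block; the actual block in the paper may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def generative_distill_step(s_tokens, t_tokens, gen_block, mask_ratio=0.5):
    """Randomly zero out a fraction of student tokens, let a generative block
    restore them, and match the restored tokens to the teacher's."""
    B, N, D = s_tokens.shape
    keep = (torch.rand(B, N, 1, device=s_tokens.device) >= mask_ratio).float()
    restored = gen_block(s_tokens * keep)           # generative block fills in masked tokens
    return F.mse_loss(restored, t_tokens.detach())  # distill on all positions

# Usage example with placeholder shapes (2 images, 196 patches, 384-dim tokens).
gen_block = nn.TransformerEncoderLayer(d_model=384, nhead=6, batch_first=True)
s = torch.randn(2, 196, 384)   # student patch tokens
t = torch.randn(2, 196, 384)   # teacher patch tokens
loss = generative_distill_step(s, t, gen_block)
```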