Internship (M2 / Engineer level)

Recruitment type
Internship
Duration
Urgent
Yes
Affiliation
GIPSA-lab, Univ. Grenoble Alpes
End of posting
Details (file)
Stage/ Internship: Control-Theoretic Enhancements for Gradient-Based Neural Network Training

Context and Motivation

The intersection of control theory and machine learning has recently emerged as a fertile research area, particularly for the online training of neural networks. While classical optimization algorithms such as gradient descent and Nesterov acceleration are widely adopted, they often face limitations in convergence speed, robustness, and sensitivity to hyperparameters. Integrating adaptive control strategies into the learning process has been shown to mitigate these issues.

Previous work has highlighted two complementary directions. First, Airimitoaie et al. [2023, 2022] have shown that recursive least squares algorithms with dynamically adjusted adaptation gains can substantially improve convergence and robustness in parameter estimation. Second, Zhao et al. [2019, 2020] demonstrated that feedback-based and event-driven modulation of learning rates can accelerate online neural network training while reducing unnecessary computations. Together, these contributions suggest that control-theoretic principles can be systematically applied to enhance classical gradient-based optimization methods.

Building on these findings, this internship will explore new ways to integrate control mechanisms into gradient descent and its accelerated variants. The ultimate goal is to design algorithms that achieve faster convergence, better stability, and robustness in dynamic, online learning scenarios.

Internship Objectives

The primary objective is to develop and evaluate adaptive control strategies for improving gradient-based optimization. The intern will focus on designing controllers that modulate the learning rate and possibly the update schedule, inspired by the feedback-based and event-driven approaches observed in prior work.
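To illustrate the kind of feedback-based learning-rate modulation targeted here, the sketch below uses the observed loss as a feedback signal for plain gradient descent: the step size grows geometrically while the loss keeps decreasing and is cut back when an update overshoots. This is a minimal illustrative example only; the function name, gain values, and acceptance rule are placeholder assumptions, not part of the internship specification.

```python
import numpy as np

def feedback_gd(loss_fn, grad_fn, x0, lr0=0.1, steps=200,
                gain_up=1.1, gain_down=0.5):
    """Gradient descent with a feedback-modulated step size.

    The loss acts as the feedback signal: an update that does not
    increase the loss is accepted and the learning rate is raised
    (lr *= gain_up); an update that overshoots is rejected and the
    learning rate is reduced (lr *= gain_down).
    """
    x = np.asarray(x0, dtype=float)
    lr = lr0
    prev_loss = loss_fn(x)
    for _ in range(steps):
        x_new = x - lr * grad_fn(x)
        new_loss = loss_fn(x_new)
        if new_loss <= prev_loss:      # progress: accept step, speed up
            x, prev_loss = x_new, new_loss
            lr *= gain_up
        else:                          # overshoot: reject step, slow down
            lr *= gain_down
    return x, lr
```

On a simple quadratic loss, for instance `feedback_gd(lambda v: float(v @ v), lambda v: 2 * v, [3.0, -2.0])`, the rule drives the iterate toward the minimizer without hand-tuning a fixed learning rate.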
In addition, the intern will:
• Formulate theoretical models to analyze the stability and convergence of the proposed algorithms.
• Implement the algorithms in a deep learning framework such as PyTorch or TensorFlow.
• Evaluate performance on standard benchmark datasets (e.g., CIFAR-10, CIFAR-100, MNIST) and compare against classical optimizers such as Adam, RMSProp, and vanilla gradient descent.
• Investigate hybrid strategies combining acceleration techniques with adaptive control for online or streaming learning tasks.

Expected Contributions

The intern is expected to make both theoretical and practical contributions. They will design algorithms that integrate control-theoretic insights with classical optimization methods, and evaluate them empirically to quantify improvements in speed, stability, and robustness. The work may lead to the preparation of technical publications.

Candidate Profile

The ideal candidate is a motivated Master's student or early-stage PhD student in Computer Science, Applied Mathematics, Automatic Control, or a related discipline. They should have:
• Good foundations in optimization, machine learning, and control theory.
• Experience in Python programming and deep learning frameworks.
• Analytical skills to study algorithmic stability and convergence.
• Curiosity and independence, with the ability to design experiments and interpret results.
Prior experience in online learning, adaptive control, or accelerated optimization is a plus, but not strictly required.

Supervision and Environment

The internship will be conducted at GIPSA-lab (Grenoble, France), an excellent research laboratory of Univ. Grenoble Alpes and CNRS. The student will work in a collaborative research group specialized in automatic control and machine learning.
The intern will benefit from close supervision and access to high-performance computing resources. The environment combines theoretical modeling, algorithm design, and computational experimentation, ensuring a comprehensive research experience.