“Never let a poorly executed sequence pass”
Research Interests
Machine learning
Diffusion models
Sequential decision-making, bandits and RL
Kernel methods, Gaussian processes and Bayesian optimisation
Hiring
- PhD position in machine learning (bandits, RL, Bayesian optimisation) at UCL, supported by MediaTek Research. Please feel free to email your resume if you are interested.
- MediaTek Research is hiring AI researchers and interns to work on research-oriented projects in generative models, AI for communication, RL, bandits, DL, GPs, optimization and more. Feel free to get in touch if you are interested. Our offices are located in Cambridge, London and Taipei.
Supervision
Sing-Yuan Yeh
Mark Chang
Sudeep Salgia
Nacime Bouziani
Danyal Ahmed
Ayman Boustati
Latest
- “Sample Complexity of Kernel-Based Q-Learning” is accepted at AISTATS 2023.
- “Fisher-Legendre (FishLeg) optimization of deep neural networks” is accepted at ICLR 2023.
- “Delayed Feedback in Kernel Bandits” is available on arXiv.
- “Provably and Practically Efficient Neural Contextual Bandits” is available on arXiv.
- “Near-Optimal Collaborative Learning in Bandits”, a joint work with Clémence Réda and Emilie Kaufmann, is accepted at NeurIPS 2022.
- Presenting “Gradient Descent: Robustness to Adversarial Corruption” at the OPT2022 workshop, NeurIPS 2022 (3 Dec, New Orleans).
- Presenting an open problem on noise-free kernel-based bandits at COLT 2022 (July 2-5, London).
- “Near-Optimal Collaborative Learning in Bandits” is available on arXiv.
- “Improved Convergence Rates for Sparse Approximation Methods in Kernel-Based Learning” is accepted to be presented at ICML 2022 (17-23 July)
- (2022) MediaTek Research is hiring AI researchers and interns to work on research-oriented projects in RL, bandits, DL, GPs, optimization and more. Feel free to get in touch if you are interested! Our offices are located in Cambridge, UK and Taipei, Taiwan.
- Our papers were accepted at NeurIPS 2021.
- I will be moderating the “Bandits, RL and Control” session at COLT 2021.
- Check out our open problem at COLT 2021: “Tight Online Confidence Intervals for RKHS Elements”.
- The neural tangent kernel is similar to the Matérn kernel: “Uniform Generalization Bounds for Overparameterized Neural Networks”.
- “On Information Gain and Regret Bounds in Gaussian Process Bandits” is accepted to be presented at AISTATS 2021 (13-15 April).
- “A Computationally Efficient Approach to Black-box Optimization using Gaussian Process Models” is available on arXiv.
- “On Information Gain and Regret Bounds in Gaussian Process Bandits” is available on arXiv.
- “Scalable Thompson Sampling using Sparse Gaussian Process Models” is available on arXiv.
- “Stochastic Coordinate Minimization with Progressive Precision for Stochastic Convex Optimization” is accepted to be presented at ICML 2020 (12-18 July).
- “Amortized variance reduction for doubly stochastic objectives” is accepted to be presented at UAI 2020 (4-6 August).
- “Multi-Armed Bandits on Unit Interval Graphs” is accepted to be published in IEEE Transactions on Network Science and Engineering (2020).