Chinmaya Kausik

Mathematics Ph.D. Student, University of Michigan. Curriculum Vitae.


3852 East Hall

530 Church Street

Ann Arbor, MI 48104

Hi there! I’m Chinmaya Kausik, a third-year mathematics Ph.D. candidate at UMich working on sequential decision making, statistics, optimization, and machine learning. I am co-advised by Prof. Ambuj Tewari and Prof. Martin Strauss.

Check out my papers, projects, and personal interests!

What do I care about, academically?
  • Mathematical problems motivated by tangible, real-world questions. These days, my work focuses on sequential decision making in various settings: offline-to-online transfer, partial observability and latent information, and non-standard feedback and reward models. I also have side projects in deep learning. In contrast, much of my undergraduate background was in geometry, topology, and dynamics, with work in computer-assisted topology and geometry.
  • Increasing accessibility to and within higher mathematics, and creating communities where ideas cross-pollinate and people pull each other up. I started the Stats, Physics, Astronomy, Math (SPAM) graduate student social initiative at the University of Michigan, and I co-founded and co-organize Monsoon Math Camp. I have also helped build and expand other mathematical communities, such as platforms for the PolyMath REU, DRP programs, and the undergraduate math organization at IISc.
What am I doing these days?
  • Collaborating with Yonathan Efroni (Meta), Aadirupa Saha (Apple), and Nadav Merlis (ENSEA) on bandit and reinforcement learning algorithms with feedback at varying costs and accuracies, also called multi-fidelity feedback.
  • Working with Aldo Pacchiano (Broad Institute of MIT and Harvard) and Mirco Mutti (Politecnico di Milano) on unifying various reward and problem frameworks in reinforcement learning and bandits.
  • Designing optimal algorithms for offline-to-online transfer in latent bandits with Kevin Tan (University of Pennsylvania).
  • Extending my work with Rishi Sonthalia (UCLA) and Kashvi Srivastava (UMich) to more complex denoising models.
  • Working on algorithms for offline policy evaluation (OPE) in linear bandits, and on the role of the geometry of action sets.
  • Continuing work on our project from LOGML 2022! I was a participant in Dr. Eli Meirom’s group, which plans to use RL for graph rewiring in GNNs to prevent oversquashing in long-range problems.
  • Thinking about extensions of De Finetti’s theorem to decision processes.
  • Organizing an interdepartmental social initiative, SPAM (Statistics, Physics, Astronomy, Mathematics).
  • Fleshing out ideas for more academic communities like Monsoon Math.
What do I want to learn about/do in the future?

primary goals

  • Work on preference-based variants of sequential decision making problems.
  • Use the multi-step inverse kinematics perspective to design algorithms that work beyond Markovian assumptions and have strong empirical performance.
  • Explore multi-objective decision making.
  • Start maintaining my progress log again.
  • Learn about safe RL and think about techniques beyond primal-dual ones, perhaps using model-based RL with uncertain models.
  • Watch lectures from the Data Driven Decision Processes program at the Simons Institute this semester.

side-quests

  • Causal inference and its interaction with sequential decision making and RL.
  • Using insights from machine learning for biology; for example, learning a hierarchical or causal structure from genomics data.
  • Diving deeper into the theory behind GNNs and deep learning in general.
  • Algorithmic fairness.

news

Nov 29, 2023 I have received the Rackham International Student Fellowship, which is offered to 25 students across all Rackham graduate departments!
Oct 29, 2023 Our paper “Denoising Low-Rank Data Under Distribution Shift: Double Descent and Data Augmentation” has been accepted to the NeurIPS workshop on the Mathematics of Modern Machine Learning (M3L)!
Jun 15, 2023 Two new preprints (confounded RL and double descent phenomena with input noise) added to arXiv!
Apr 29, 2023 My paper on learning mixtures of Markov chains and MDPs with Kevin Tan and my advisor, Prof. Tewari, has been accepted to ICML 2023 (Oral)!
Apr 7, 2023 Invited to attend the Princeton ML Theory Summer School 2023!