Working Papers
  1. Kato, M., Okumura, K., Ishihara, T., and Kitagawa, T.,
    Adaptive Experimental Design for Policy Learning.
  2. Cabel, D., Sugasawa, S., Kato, M., Takanashi, K., and McAlinn, K.,
    Bayesian Spatial Predictive Synthesis.
  3. Ariu, K., Kato, M., Komiyama, J., and McAlinn, K., (Alphabetical order)
    Policy Choice and Best Arm Identification: Comments on "Adaptive Treatment Assignment in Experiments for Policy Choice"
    First draft: 16 Sep 2021. [arXiv]
    Revise and Resubmit for Econometrica
  4. Kato, M., and Ariu, K.,
    The Role of Contextual Information in Best Arm Identification
    First draft: 26 Jun 2021. [arXiv]
    Reject and Resubmit for Journal of Machine Learning Research; resubmitted
  5. Kato, M., Nakagawa, K., Abe, K., and Morimura, T.,
    Direct Expected Quadratic Utility Maximization for Mean-Variance Controlled Reinforcement Learning
    First draft: 29 Sept 2020; updated 3 Apr 2021. [arXiv]
  6. Kato, M., Ishihara, T., Honda, J., and Narita, Y.,
    Efficient Adaptive Experimental Design for Average Treatment Effect Estimation
Conference Proceedings
  1. Kato, M.*, Oga, A., Komatsubara, W., and Inokuchi, R.,
    Active Adaptive Experimental Design for Treatment Effect Estimation with Covariate Choices
    In ICML 2024 [arXiv]
  2. Kato, M.*, Imaizumi, M., and Minami, K.,
    Unified Perspective on Probability Divergence via Maximum Likelihood Density Ratio Estimation: Bridging KL-Divergence and Integral Probability Metrics
    In AISTATS 2023 [arXiv]
  3. Yasui, S.*, and Kato, M.*, (* Equal contribution)
    Learning Classifiers under Delayed Feedback with a Time Window Assumption
    In KDD 2022 [arXiv]
  4. Kato, M., Imaizumi, M., Kakehi, H., McAlinn, K., and Yasui, S.,
    Learning Causal Relationships from Conditional Moment Conditions by Importance Weighting
    In ICLR 2022 (Spotlight) [openreview][arXiv][slide][poster]
  5. Kato, M., Yasui, S., and McAlinn, K.,
    The Adaptive Doubly Robust Estimator for Policy Evaluation in Adaptive Experiments and a Paradox Concerning Logging Policy.
    In NeurIPS 2021. [arXiv]
  6. Kato, M., and Teshima, T.,
    Non-negative Bregman divergence minimization for deep direct density ratio estimation.
    In ICML 2021.
  7. Togashi, R., Kato, M., Otani, M., Sakai, T., and Satoh, S.,
    Scalable Personalised Item Ranking through Parametric Density Estimation.
    In SIGIR 2021.
  8. Togashi, R., Kato, M., Otani, M., and Satoh, S.,
    Density-Ratio Based Personalised Ranking from Implicit Feedback.
    In The Web Conference (WWW) 2021.
  9. Kato, M.*, Uehara, M.*, and Yasui, S., (* Equal contribution)
    Off-Policy Evaluation and Learning for External Validity under a Covariate Shift.
    In NeurIPS 2020 (Spotlight).
  10. Kato, M., Teshima, T., and Honda, J.,
    Learning from positive and unlabeled data with a selection bias.
    In ICLR 2019. [openreview]
Journal Articles
  1. Komiyama, J., Ariu, K., Kato, M., and Qin, C.,
    Optimal Simple Regret in Bayesian Best Arm Identification
    Mathematics of Operations Research.
  2. Kato, M., and Ito, S.,
    Best-of-Both-Worlds Linear Contextual Bandits
    Transactions on Machine Learning Research.
Workshop Presentations
  1. Fukuda, A., Kato, M., McAlinn, K., and Takanashi, K.,
    Bayesian Predictive Synthetic Control Methods
    In ICML 2023 Workshop on Counterfactuals in Minds and Machines. [Google Drive]
  2. Kato, M., Imaizumi, M., Ishihara, T., and Kitagawa, T.,
    Fixed-Budget Hypothesis Best Arm Identification: On the Information Loss in Experimental Design
    In ICML 2023 Workshop on New Frontiers in Learning, Control, and Dynamical Systems. [openreview]
  3. Kato, M., Imaizumi, M., Ishihara, T., and Kitagawa, T.,
    Semiparametric Best Arm Identification with Contextual Information
    In IBIS. [arXiv] [poster]
  4. Kato, M., Imaizumi, M., McAlinn, K., Yasui, S., and Kakehi, H.,
    Learning Causal Relationships from Conditional Moment Conditions by Importance Weighting
    In NeurIPS 2021 Workshop on Machine Learning meets Econometrics. [arXiv]
  5. Kato, M., Nakagawa, K., Abe, K., and Morimura, T.,
    Direct Expected Quadratic Utility Maximization for Mean-Variance Controlled Reinforcement Learning
    In NeurIPS 2021 Workshop on Deep Reinforcement Learning. [arXiv]
  6. Kato, M., Yasui, S., and McAlinn, K.,
    The Adaptive Doubly Robust Estimator for Policy Evaluation in Adaptive Experiments.
    In ICML 2021 Workshop on The Neglected Assumptions In Causal Inference. [arXiv]
  7. Kato, M., Ishihara, T., Honda, J., and Narita, Y.,
    Adaptive Experimental Design for Efficient Treatment Effect Estimation.
    In NeurIPS 2020 Workshop on Causal Discovery & Causality-Inspired Machine Learning.