Research
My research focuses on real-world applications of AI models.
I am interested in representation learning and transfer learning, particularly at the intersection of probabilistic uncertainty and machine learning.
My long-term research goal is to understand, evaluate, and enhance modern AI models, such as pre-trained models and large foundation models.
To achieve this goal, I develop new theories, algorithms, and applications.
Most recently, I have focused on large language models (LLMs) that refine themselves through self-prediction.
|
- Model-based Preference Optimization in Abstractive Summarization without Human Feedback
  Jaepill Choi*, Kyubyung Chae*, Jiwoo Song, Yohan Jo, Taesup Kim. EMNLP 2024.
  arXiv
- Mitigating Hallucination in Abstractive Summarization with Domain-Conditional Mutual Information
  Kyubyung Chae*, Jaepill Choi*, Yohan Jo, Taesup Kim. NAACL Findings 2024.
  Paper / arXiv / Code
- Uncertainty-Guided Online Test-time Adaptation via Meta-Learning
  Kyubyung Chae, Taesup Kim. ICML Workshop 2023.
  Paper / Poster / Code / Workshop
- Test-time Adaptation with Energy-based Indicator in Open-set Scenarios
  Yewon Han, Kyubyung Chae, Taesup Kim. KCC 2024. Best Paper.
- Fine-tuning Language Models to Alleviate Hallucinations in Abstractive Summarization
  Jaepill Choi, Kyubyung Chae, Jiwoo Song, Taesup Kim. KCC 2024.
  Paper
|
Work Experience
- LG CNS, Seoul, Data Science Specialist, Aug 2018 - Feb 2022 (3 yrs 7 mos)
|
Education
- PhD student, Seoul National University, Data Science, Mar 2024 - Present
- Master's degree, Seoul National University, Data Science, Mar 2022 - Feb 2024
- Bachelor's degree, Yonsei University, Applied Statistics, Mar 2011 - Feb 2018
|
The source of this website is available here.