Paper site for the NIPS report
Added by Jinwuk Admin over 2 years ago
Is Out-of-distribution Detection Learnable?
by Zhen Fang, Yixuan Li, Jie Lu, Jiahua Dong, Bo Han, Feng Liu
Tues Nov 29 — Poster Session 1
Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding
by Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, et al.
Thurs Dec 1 — Poster Session 5
Elucidating the Design Space of Diffusion-Based Generative Models
by Tero Karras, Miika Aittala, Timo Aila, Samuli Laine
Wed Dec 7 — Featured Papers Panels 3B
High-dimensional limit theorems for SGD: Effective dynamics and critical scaling
by Gerard Ben Arous, Reza Gheissari, Aukosh Jagannath
This work studies the scaling limits of SGD with constant step size in the high-dimensional regime. It shows how complex the dynamics of SGD can be when the step size is large: the effective dynamics are then characterized by an SDE, whereas for small step sizes they reduce to the deterministic gradient-flow ODE. Comparing the two regimes gives insight into the nonconvex optimization landscape.
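To make the two regimes concrete, here is a minimal, self-contained sketch (not the paper's high-dimensional analysis): constant-step-size SGD on a toy 1-D quadratic, compared against the gradient-flow ODE on the same time grid. The loss, noise model, and step sizes are assumptions chosen only for illustration.

```python
# Toy illustration: with a small constant step, noisy SGD iterates track the
# gradient-flow ODE x' = -x; with a large step, the fluctuations stay O(1),
# behaving like an SDE around the flow. All constants are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sgd_path(step, n_iters, noise_std=1.0, x0=2.0):
    """SGD on f(x) = x^2 / 2 with additive gradient noise."""
    x, path = x0, [x0]
    for _ in range(n_iters):
        grad = x + noise_std * rng.standard_normal()  # noisy gradient of f
        x -= step * grad
        path.append(x)
    return np.array(path)

def ode_path(step, n_iters, x0=2.0):
    """Euler discretization of the gradient flow x' = -x on the same grid."""
    x, path = x0, [x0]
    for _ in range(n_iters):
        x -= step * x
        path.append(x)
    return np.array(path)

for step in (0.01, 0.5):      # small vs. large constant step size
    n = int(5.0 / step)       # same continuous-time horizon t = 5
    dev = np.abs(sgd_path(step, n) - ode_path(step, n)).max()
    print(f"step={step}: max |SGD - ODE| = {dev:.3f}")
```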
Gradient Descent: The Ultimate Optimizer
by Kartik Chandra, Audrey Xie, Jonathan Ragan-Kelley, Erik Meijer
This paper reduces sensitivity to hyperparameters in gradient descent by developing a method that optimizes the hyperparameters themselves by gradient descent, and then recursively optimizes the hyper-hyperparameters, and so on. Since gradient descent is ubiquitous, the potential impact is tremendous (a one-level sketch follows below).
Wed Nov 30 — Poster Session 4
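Below is a hedged one-level sketch of the idea, not the paper's implementation: for plain SGD the derivative of the loss with respect to the step size has a closed form, so the step size can itself be updated by gradient descent. The toy objective and the hyper step size beta are assumptions; the paper instead obtains these derivatives by automatic differentiation and stacks the construction recursively.

```python
# One level of hyperparameter optimization: since x_t = x_{t-1} - alpha * g_{t-1},
# we have d f(x_t) / d alpha = -g_t . g_{t-1}, so alpha can be updated by
# gradient descent with a hyper step size beta.
import numpy as np

def f(x):        # toy quadratic objective (assumption, not from the paper)
    return 0.5 * np.dot(x, x)

def grad_f(x):
    return x

x = np.array([3.0, -2.0])
alpha, beta = 0.01, 0.001        # step size and hyper step size
g_prev = np.zeros_like(x)

for t in range(100):
    g = grad_f(x)
    # Descend on alpha using the hypergradient -g_t . g_{t-1}:
    alpha += beta * np.dot(g, g_prev)
    x -= alpha * g
    g_prev = g

print(f"final loss {f(x):.2e}, learned alpha {alpha:.4f}")
```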
Riemannian Score-Based Generative Modelling
by Valentin De Bortoli, Emile Mathieu, Michael John Hutchinson, James Thornton, Yee Whye Teh, Arnaud Doucet
The paper generalizes score-based generative models (SGMs) from Euclidean space to Riemannian manifolds by identifying the major components that contribute to the success of SGMs and extending each to the manifold setting. The method is both a novel and technically useful contribution (a sketch of the manifold noising step follows below).
Wed Nov 30 — Poster Session 4
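As a flavor of what an SGM on a manifold requires, here is a minimal sketch of just one ingredient, the forward noising process: in Euclidean space this is Brownian motion, and on a manifold it can be approximated by a geodesic random walk. The unit sphere S^2, step count, and step size below are assumptions for illustration; this is not the paper's method or code.

```python
# Geodesic random walk on the unit sphere S^2: sample a Gaussian step in the
# tangent plane, then move along the geodesic via the sphere's exponential map.
import numpy as np

rng = np.random.default_rng(0)

def tangent_gaussian(x, scale):
    """Gaussian sample in the tangent plane at x (project out the normal)."""
    v = scale * rng.standard_normal(3)
    return v - np.dot(v, x) * x

def exp_map(x, v):
    """Exponential map on S^2: follow the geodesic from x with velocity v."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return x
    return np.cos(n) * x + np.sin(n) * (v / n)

x = np.array([0.0, 0.0, 1.0])    # start at the north pole
dt = 0.01
for _ in range(1000):            # geodesic random walk ~ manifold Brownian motion
    x = exp_map(x, tangent_gaussian(x, np.sqrt(dt)))

print("still on sphere:", np.isclose(np.linalg.norm(x), 1.0), "sample:", x)
```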
Gradient Estimation with Discrete Stein Operators
by Jiaxin Shi, Yuhao Zhou, Jessica Hwang, Michalis Titsias, Lester Mackey
This paper considers gradient estimation when the distribution is discrete, where the most common gradient estimators suffer from excessive variance. To improve the quality of gradient estimation, the authors introduce a variance-reduction technique based on Stein operators for discrete distributions. Even though the Stein operator is classical, this work provides a nice interpretation of it for gradient estimation and also demonstrates practical improvement in experiments (a toy illustration follows below).
Tues Nov 29 — Poster Session 1
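To illustrate the variance problem being attacked, here is a minimal sketch comparing the plain score-function (REINFORCE) estimator on a toy Bernoulli model with and without a simple mean-subtraction control variate. The objective f and all constants are assumptions; the paper's contribution is a much stronger control variate built from discrete Stein operators, which is not reproduced here.

```python
# Estimate d/dtheta E_{x ~ Bern(p)}[f(x)], p = sigmoid(theta), by REINFORCE,
# and show how a crude control variate already cuts the variance.
import numpy as np

rng = np.random.default_rng(0)

theta = 0.3
p = 1.0 / (1.0 + np.exp(-theta))          # Bernoulli parameter
f = lambda x: (x - 0.45) ** 2             # toy objective (assumption)

def reinforce(n, baseline=False):
    x = rng.binomial(1, p, size=n).astype(float)
    score = x - p                         # d log Bern(x; p) / d theta
    fx = f(x)
    if baseline:                          # control variate: subtract batch mean
        fx = fx - fx.mean()               # (slightly biased; leave-one-out fixes it)
    return fx * score                     # per-sample gradient estimates

exact = p * (1 - p) * (f(1.0) - f(0.0))   # closed-form gradient for comparison
for baseline in (False, True):
    g = reinforce(100_000, baseline)
    print(f"baseline={baseline}: mean={g.mean():+.4f} "
          f"(exact {exact:+.4f}), var={g.var():.4f}")
```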