Primary Hero

description text

alt text

description text

Heading

In publishing and graphic design, Lorem ipsum is a placeholder text commonly used to demonstrate the visual form of a document or a typeface without relying on meaningful content. Lorem ipsum may be used as a placeholder before the final copy is available.

Title

Connectomics2024-1-ExcitatoryNeurons

Case Study

Title here

Acknowledgements

We give special thanks to the Imagen Video team for their collaboration and for providing their system to do super resolution. To our artist friends Irina Blok and Alonso Martinez for extensive creative exploration of the system and for using Phenaki to generate some of the videos showcased here. We also want to thank Niki Parmar for initial discussions. Special thanks to Gabriel Bender and Thang Luong for reviewing the paper and providing constructive feedback. We appreciate the efforts of Kevin Murphy and David Fleet for advising the project and providing feedback throughout. We are grateful to Evan Rapoport, Douglas Eck and Zoubin Ghahramani for supporting this work in a variety of ways. Tim Salimans and Chitwan Saharia helped us with brainstorming and coming up with shared benchmarks. Jason Baldridge was instrumental for bouncing ideas. Alex Rizkowsky was very helpful in keeping things organized, while Erica Moreira and Victor Gomes ensured smooth resourcing for the project. Sarah Laszlo and Kathy Meier-Hellstern have greatly helped us incorporate important responsible AI practices into this project, which we are immensely grateful for. Finally, Blake Hechtman and Anselm Levskaya were generous in helping us debug a number of JAX issues.

Credit for Phenakistoscope asset:

Creator: Muybridge, Eadweard, 1830-1904, artist
Title: The zoopraxiscope* - a couple waltzing (No. 35., title from item.)
Edits made: Extended background and converted file format to mp4

Connectomics2024-2-InhibitoryNeurons

Caption alignment center.

Code block - Padding Bottom

this is the caption

Dynamic Accordion - Padding Bottom

Item

asdfasdfasdfasdf

Item

asdfasdfasdfasdf

Item

asdfasdfasdfasdf

Dynamic Accordion - Padding Both

Item

asdfasdfasdfasdf

Item

asdfasdfasdfasdf

Item

asdfasdfasdfasdf

Multi-Column - Both Padding

  • Top Caption

    alt text
  • Top Caption

    alt text
  • Top Caption

    alt text
  • Top Caption

    alt text

Mixed Publications List - Bottom Padding

De-rendering the World’s Revolutionary Artefacts
Elliott Wu
Jiajun Wu
Angjoo Kanazawa
Computer Vision and Pattern Recognition (CVPR) (2021)
Abstract: Recent works have shown exciting results in unsupervised image de-rendering—learning to decompose 3D shape, appearance, and lighting from single-image collections without explicit supervision. However, many of these assume simplistic material and lighting models. We propose a method, termed RADAR (Revolutionary Artefact De-rendering And Re-rendering), that can recover environment illumination and surface materials from real single-image collections, relying neither on explicit 3D supervision, nor on multi-view or multi-light images. Specifically, we focus on rotationally symmetric artefacts that exhibit challenging surface properties including specular reflections, such as vases. We introduce a novel self-supervised albedo discriminator, which allows the model to recover plausible albedo without requiring any ground-truth during training. In conjunction with a shape reconstruction module exploiting rotational symmetry, we present an end-to-end learning framework that is able to de-render the world's revolutionary artefacts. We conduct experiments on a real vase dataset and demonstrate compelling decomposition results, allowing for applications including free-viewpoint rendering and relighting.

Quote - Padding Bottom

quote
Name

Rich Text With Footnotes - Padding Bottom

Rich Text With Table - Padding Bottom

Sound Players - Padding Bottom


  1. Footnote Test

  2. Test footnotes for Rich text acknowledgement