Reconstructing Hand-Held Objects in 3D
from Images and Videos


Jane Wu1     Georgios Pavlakos2     Georgia Gkioxari3     Jitendra Malik1
1University of California, Berkeley     2University of Texas at Austin     3California Institute of Technology

Paper
Code



Abstract: Objects manipulated by the hand (i.e., manipulanda) are particularly challenging to reconstruct from Internet videos. Not only does the hand occlude much of the object, but the object is also often visible in only a small number of image pixels. At the same time, two strong anchors emerge in this setting: (1) estimated 3D hands help disambiguate the location and scale of the object, and (2) the set of manipulanda is small relative to all possible objects. With these insights in mind, we present a scalable paradigm for hand-held object reconstruction that builds on recent breakthroughs in large language/vision models and 3D object datasets. Given a monocular RGB video, we aim to reconstruct hand-held object geometry in 3D, over time. In order to obtain the best-performing single-frame model, we first present MCC-Hand-Object (MCC-HO), which jointly reconstructs hand and object geometry given a single RGB image and an inferred 3D hand as inputs. Subsequently, we prompt a text-to-3D generative model using GPT-4(V) to retrieve a 3D object model that matches the object in the image(s); we call this alignment Retrieval-Augmented Reconstruction (RAR). RAR provides unified object geometry across all frames, and the result is rigidly aligned with both the input images and the 3D MCC-HO observations in a temporally consistent manner. Experiments demonstrate that our approach achieves state-of-the-art performance on lab and Internet image/video datasets.

Method Overview


Overview. Given an RGB image and estimated 3D hand, our method reconstructs hand-held objects in three stages. In the first stage, MCC-HO is used to predict hand and object point clouds. In the second stage, a template mesh for the object is obtained using Retrieval-Augmented Reconstruction (RAR). In the third stage, the template object mesh is rigidly aligned to the network-inferred geometry using ICP.
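The structure of the pipeline can be summarized in a short sketch. The three stage functions below are hypothetical placeholders named for illustration only; they are not the functions exposed by the released code, and only the control flow reflects the description above.

```python
# Structural sketch of the three-stage pipeline (placeholder functions, not the released API).
import numpy as np


def run_mcc_ho(image, hand_mesh):
    """Stage 1 (placeholder): MCC-HO predicts hand and object point clouds."""
    raise NotImplementedError("wraps the MCC-HO network")


def retrieve_template_mesh(image):
    """Stage 2 (placeholder): RAR -- GPT-4(V) description, then text-to-3D."""
    raise NotImplementedError("wraps GPT-4(V) and a text-to-3D model")


def icp_align(template_mesh, object_points):
    """Stage 3 (placeholder): rigidly register the template to the inferred points."""
    raise NotImplementedError("wraps an ICP solver")


def reconstruct(image: np.ndarray, hand_mesh):
    hand_pts, obj_pts = run_mcc_ho(image, hand_mesh)   # stage 1
    template = retrieve_template_mesh(image)           # stage 2
    aligned = icp_align(template, obj_pts)             # stage 3
    return hand_pts, obj_pts, aligned
```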


MCC-Hand-Object (MCC-HO)


MCC-HO test results. The input image (top, left), network-inferred hand-object point cloud (top, right), rigid alignment of the ground truth mesh with the point cloud using ICP (bottom, left), and an alternative view of the point cloud (bottom, right) are shown.
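For the rigid alignment step, a minimal ICP sketch is shown below using Open3D; this library choice is an assumption (the page does not state which ICP implementation is used), and the sampling count and correspondence threshold are illustrative.

```python
# Minimal ICP alignment sketch (assumes Open3D): register a template or
# ground-truth object mesh to the network-inferred object point cloud.
import numpy as np
import open3d as o3d


def align_mesh_to_points(mesh_path: str, object_points: np.ndarray,
                         threshold: float = 0.02) -> np.ndarray:
    """Return a 4x4 rigid transform mapping the mesh onto the point cloud."""
    mesh = o3d.io.read_triangle_mesh(mesh_path)
    source = mesh.sample_points_uniformly(number_of_points=10000)

    target = o3d.geometry.PointCloud()
    target.points = o3d.utility.Vector3dVector(object_points)

    # Initialize by translating the mesh samples onto the target centroid,
    # so ICP starts roughly in place.
    init = np.eye(4)
    init[:3, 3] = target.get_center() - source.get_center()

    result = o3d.pipelines.registration.registration_icp(
        source, target, threshold, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```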


Retrieval-Augmented Reconstruction (RAR)


Retrieval-Augmented Reconstruction (RAR). Given an input image, we prompt GPT-4(V) to recognize and provide a text description of the hand-held object. The text description is passed to a text-to-3D model (in our case, Genie) to obtain a 3D object.
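The recognition step of RAR can be sketched with the OpenAI Python SDK. The prompt and model name below are illustrative assumptions (the page does not give the exact prompt or model version), and the resulting text description would then be passed to the text-to-3D model (Genie), whose interface is not shown here.

```python
# Hedged sketch of the GPT-4(V) recognition step of RAR (OpenAI Python SDK).
import base64
from openai import OpenAI


def describe_hand_held_object(image_path: str) -> str:
    """Ask a GPT-4(V)-class model for a short description of the hand-held object."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed; any vision-capable GPT-4 model works here
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe the object held in the hand in a short "
                         "phrase suitable as a text-to-3D prompt."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content.strip()
```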



Results

Our approach applied to in-the-wild Internet videos from the 100 Days of Hands (100DOH) dataset.

Citation



Acknowledgements

We would like to thank Ilija Radosavovic for helpful discussions and feedback. This work was supported by ONR MURI N00014-21-1-2801. JW was supported by the NSF Mathematical Sciences Postdoctoral Fellowship and the UC President's Postdoctoral Fellowship. This webpage template was borrowed from some colorful folks. Icons: Flaticon.