Paper | Code
Abstract: Objects manipulated by the hand (i.e., manipulanda) are particularly challenging to reconstruct from Internet videos. Not only does the hand occlude much of the object, but the object is also often visible in only a small number of image pixels. At the same time, two strong anchors emerge in this setting: (1) estimated 3D hands help disambiguate the location and scale of the object, and (2) the set of manipulanda is small relative to all possible objects. With these insights in mind, we present a scalable paradigm for hand-held object reconstruction that builds on recent breakthroughs in large language/vision models and 3D object datasets. Given a monocular RGB video, we aim to reconstruct hand-held object geometry in 3D, over time. To obtain the best-performing single-frame model, we first present MCC-Hand-Object (MCC-HO), which jointly reconstructs hand and object geometry given a single RGB image and an inferred 3D hand as inputs. Subsequently, we prompt a text-to-3D generative model using GPT-4(V) to retrieve a 3D object model that matches the object in the image(s); we call this retrieval Retrieval-Augmented Reconstruction (RAR). RAR provides unified object geometry across all frames, and the result is rigidly aligned with both the input images and the 3D MCC-HO observations in a temporally consistent manner. Experiments demonstrate that our approach achieves state-of-the-art performance on lab and Internet image/video datasets.
Overview. Given an RGB image and estimated 3D hand, our method reconstructs hand-held objects in three stages. In the first stage, MCC-HO is used to predict hand and object point clouds. In the second stage, a template mesh for the object is obtained using Retrieval-Augmented Reconstruction (RAR). In the third stage, the template object mesh is rigidly aligned to the network-inferred geometry using ICP.
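The third-stage alignment can be reproduced with any off-the-shelf ICP implementation. Below is a minimal sketch using Open3D, assuming the RAR template mesh and the MCC-HO object point cloud have already been exported to disk; the file names, the centroid/scale initialization heuristic, and the correspondence threshold are illustrative assumptions, not taken from the paper.

```python
import numpy as np
import open3d as o3d

# Load the RAR template mesh and the MCC-HO object point cloud
# (file names are placeholder assumptions).
template = o3d.io.read_triangle_mesh("rar_template.obj")
src = template.sample_points_uniformly(number_of_points=10000)
tgt = o3d.io.read_point_cloud("mcc_ho_object_points.ply")

# Rough similarity initialization: match centroids and mean radius so ICP
# starts near the network-inferred geometry. This is only a heuristic; in
# the full method the estimated 3D hand already anchors object location
# and scale.
src_pts, tgt_pts = np.asarray(src.points), np.asarray(tgt.points)
scale = (np.linalg.norm(tgt_pts - tgt_pts.mean(0), axis=1).mean()
         / np.linalg.norm(src_pts - src_pts.mean(0), axis=1).mean())
init = np.eye(4)
init[:3, :3] *= scale
init[:3, 3] = tgt_pts.mean(0) - scale * src_pts.mean(0)

# Point-to-point ICP refines the transform aligning the template to the
# MCC-HO prediction; the returned transformation includes the init, and
# the 0.02 threshold is an illustrative value.
reg = o3d.pipelines.registration.registration_icp(
    src, tgt, max_correspondence_distance=0.02, init=init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

template.transform(reg.transformation)
o3d.io.write_triangle_mesh("aligned_template.obj", template)
```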
MCC-HO test results. Shown are the input image (top left), the network-inferred hand-object point cloud (top right), the rigid alignment of the ground-truth mesh with the point cloud using ICP (bottom left), and an alternative view of the point cloud (bottom right).
Retrieval-Augmented Reconstruction (RAR). Given an input image, we prompt GPT-4(V) to recognize and provide a text description of the hand-held object. The text description is passed to a text-to-3D model (in our case, Genie) to obtain a 3D object.
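The retrieval step amounts to a vision-language query followed by a text-to-3D call. The sketch below uses the OpenAI Python client for the GPT-4(V) query; the text-to-3D step is left as a placeholder function, since the paper uses Genie and no particular API is prescribed here. The prompt wording, model name, file name, and function names are illustrative assumptions.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def describe_handheld_object(image_path: str) -> str:
    """Ask a GPT-4(V)-class model for a short description of the hand-held object."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable GPT-4 variant
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "In one short phrase, name and describe the object "
                         "held by the hand in this image."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content.strip()

def text_to_3d(prompt: str) -> str:
    """Placeholder for the text-to-3D model (Genie in the paper).
    Should return a path to the generated template mesh."""
    raise NotImplementedError("Call your text-to-3D model of choice here.")

if __name__ == "__main__":
    description = describe_handheld_object("frame_0000.jpg")  # hypothetical frame
    template_mesh_path = text_to_3d(description)
```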
Acknowledgements