HDhuman: High-quality Human Performance Capture with Sparse Views

arXiv 2022


Tiansong Zhou1,2, Tao Yu1, Ruizhi Shao1, Kun Li2

1Tsinghua University    2Tianjin University   

Abstract


In this paper, we introduce HDhuman, a method that addresses the challenge of novel-view rendering of human performers wearing clothes with complex texture patterns, using only a sparse set of camera views. Although some recent works have achieved remarkable rendering quality on humans with relatively uniform textures from sparse views, their rendering quality remains limited on complex texture patterns because they cannot recover the high-frequency geometric details observed in the input views. To this end, HDhuman combines a human reconstruction network with a pixel-aligned spatial transformer and a rendering network with geometry-guided pixel-wise feature integration to achieve high-quality human reconstruction and rendering. The pixel-aligned spatial transformer computes correlations between the input views, producing reconstruction results with high-frequency details. Based on the reconstructed surface, geometry-guided pixel-wise visibility reasoning guides multi-view feature integration, enabling the rendering network to render high-quality images at 2k resolution for novel views. Unlike previous neural rendering works that must train or fine-tune an independent network for each new scene, our method is a general framework that generalizes to novel subjects. Experiments show that our approach outperforms prior generic and specific methods on both synthetic data and real-world data.

Fig. 1: Architecture of our reconstruction network. We use a pixel-aligned spatial transformer for multi-view feature fusion, enabling us to produce pixel-aligned, highly detailed reconstruction results.
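To make the fusion step concrete, below is a minimal sketch (not the authors' code) of pixel-aligned multi-view feature fusion with a transformer: each 3D query point is projected into every view, pixel-aligned features are sampled from per-view feature maps, and self-attention across the view tokens computes inter-view correlations before an occupancy prediction. All module and variable names are hypothetical, and the exact layer counts and dimensions are assumptions.

import torch
import torch.nn as nn

class PixelAlignedFusion(nn.Module):
    def __init__(self, feat_dim=64, num_heads=4):
        super().__init__()
        # Self-attention across views lets each view's pixel-aligned
        # feature attend to the corresponding features in the other views.
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=num_heads, batch_first=True)
        self.view_transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.occupancy_head = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, feat_maps, uv):
        # feat_maps: (V, C, H, W) per-view 2D feature maps
        # uv:        (V, N, 2) projections of N query points into each view,
        #            normalized to [-1, 1] for grid_sample
        sampled = torch.nn.functional.grid_sample(
            feat_maps, uv.unsqueeze(2), align_corners=True)  # (V, C, N, 1)
        tokens = sampled.squeeze(-1).permute(2, 0, 1)        # (N, V, C)
        fused = self.view_transformer(tokens)                # correlate views
        # Average the fused view tokens, then predict per-point occupancy.
        return self.occupancy_head(fused.mean(dim=1))        # (N, 1)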

Fig. 2: Architecture of our rendering network. Geometry-guided pixel-wise feature integration enables us to handle the severe occlusions caused by the sparsity of the input views, resulting in high-quality rendering.
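As an illustration of the visibility-guided integration, here is a minimal sketch (again, not the authors' implementation) of how per-view visibility can gate multi-view feature blending: a surface point counts as visible in a view when its camera-space depth matches the depth map rendered from the reconstructed surface, and only visible views contribute to the blended feature. The function names and the depth-test threshold are assumptions.

import torch

def integrate_features(feats, points_cam, depth_maps, uv, eps=5e-3):
    # feats:      (V, N, C) pixel-aligned features sampled from each view
    # points_cam: (V, N) depth of each surface point in each camera frame
    # depth_maps: (V, 1, H, W) depth of the reconstructed surface per view
    # uv:         (V, N, 2) normalized image coordinates of the points
    sampled = torch.nn.functional.grid_sample(
        depth_maps, uv.unsqueeze(2), align_corners=True)   # (V, 1, N, 1)
    surface_depth = sampled.squeeze(1).squeeze(-1)          # (V, N)
    # A point is visible in a view if its depth matches the surface depth.
    visible = (points_cam - surface_depth).abs() < eps      # (V, N)
    weights = visible.float().unsqueeze(-1)                 # (V, N, 1)
    # Occluded views contribute nothing; visible views are averaged.
    return (feats * weights).sum(0) / weights.sum(0).clamp(min=1.0)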


Results on the Twindom dataset (only 6 views as input)



Free-view rendering on real-world data (8 views as input)



Citation


@misc{zhou2022hdhuman,
  title={HDhuman: High-quality Human Performance Capture with Sparse Views}, 
  author={Tiansong Zhou and Tao Yu and Ruizhi Shao and Kun Li},
  year={2022},
  eprint={2201.08158},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}