Real-Time Microstructure Rendering with MIP-mapped Normal Map Samples
Haowen Tan*, Junqiu Zhu* (*dual first authors), Xiangxu Meng, Yanning Xu, Lu Wang, Ling-Qi Yan
Computer Graphics Forum (minor revisions)
Normal map-based microstructure rendering methods can generate both glint and scratch appearances accurately, but the extra high-resolution normal map
that defines every microfacet normal may incur high storage and computation costs. We present an example-based real-time rendering method
for arbitrary microstructure materials, which also greatly reduces required storage space. Our method takes a small-size normal map sample
as input. We implicitly synthesize a high-resolution normal map from the normal map sample and construct MIP-mapped position-normal 4D Gaussian
lobes. Based on the above MIP-mapped 4D lobes and a LUT data structure for the synthesized high-resolution normal map, an efficient Gaussian query
method is presented to evaluate the P-NDFs (Position-Normal Distribution Functions) for shading. We can render complex scenes with both glint and
scratch surfaces in real time (>30 fps) at full high-definition resolution, and the storage required per microstructure material is reduced to 30 MB.
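As a minimal sketch of the shading-time query, a P-NDF value at a (position, normal) pair can be evaluated by summing 4D position-normal Gaussian lobes. The function names and the flat lobe list below are illustrative assumptions, not the paper's actual MIP/LUT data structure, which would restrict the sum to lobes overlapping the pixel footprint at the matching MIP level:

```python
import numpy as np

def gaussian_lobe(q, mu, inv_cov, weight):
    """Un-normalized anisotropic Gaussian lobe evaluated at a 4D query q."""
    d = q - mu
    return weight * np.exp(-0.5 * d @ inv_cov @ d)

def eval_pndf(lobes, position, normal):
    """Evaluate a P-NDF at a (2D position, 2D projected-normal) query by
    summing all Gaussian lobe contributions. A real implementation would
    only visit the lobes intersecting the pixel footprint."""
    q = np.concatenate([position, normal])
    return sum(gaussian_lobe(q, mu, inv_cov, w) for mu, inv_cov, w in lobes)
```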
Neural Complex Luminaires: Representation and Rendering
Junqiu Zhu*, Yaoyi Bai*, Zilin Xu* (*equal contribution), Steve Bako, Edgar Velázquez-Armendáriz, Lu Wang, Pradeep Sen, Miloš Hašan, Ling-Qi Yan
ACM Transactions on Graphics (Proceedings of SIGGRAPH 2021)
Physically-based rendering of complex luminaires, such as grand chandeliers in concert halls,
can be extremely costly. The emitting sources are typically encased in complex
refractive geometry, creating difficult light paths that require many samples to
evaluate with Monte Carlo approaches. Previous work has attempted to speed up this process,
but the methods are either inaccurate, require very large light-field storage, or do not fit
well into modern path tracing frameworks. Inspired by the success of deep networks,
which can model complex relationships robustly and be evaluated efficiently,
we propose to use a machine learning framework to compress a complex luminaire's light field into an implicit neural
representation. Our approach can easily plug into conventional renderers, as it works with the standard techniques of
path tracing and multiple importance sampling (MIS). Our solution is to train three networks that perform the essential operations: evaluating the complex luminaire at a specific
point and view direction, importance sampling a point on the luminaire given a shading location, and blending to determine the transparency
of luminaire queries so they combine properly with other scene elements.
We perform favorably relative to state-of-the-art approaches and render final images that are close to the high sample count reference with
only a fraction of the computation and storage costs, with no need to store the original luminaire geometry and materials.
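As a hedged sketch of how the learned operations might plug into a standard MIS-based direct-lighting routine: `radiance_net` and `sample_net` below are hypothetical constant stand-ins for the paper's trained evaluation and sampling networks (the blending network is omitted for brevity), combined with the usual balance heuristic:

```python
def balance_heuristic(pdf_a, pdf_b):
    """MIS balance-heuristic weight for the strategy with density pdf_a."""
    return pdf_a / (pdf_a + pdf_b)

# Hypothetical stand-ins for the trained networks (constants for illustration):
def radiance_net(point, view_dir):
    """Evaluate the luminaire's light field at a point and view direction."""
    return 5.0

def sample_net(shading_point):
    """Importance-sample a point on the luminaire; return (point, pdf)."""
    return (0.0, 0.0, 1.0), 0.25

def direct_light_estimate(shading_point, view_dir, bsdf_pdf):
    """One light-sampling estimate, MIS-weighted against BSDF sampling."""
    point, light_pdf = sample_net(shading_point)
    weight = balance_heuristic(light_pdf, bsdf_pdf(point))
    return weight * radiance_net(point, view_dir) / light_pdf
```

Because the networks expose the same evaluate/sample/pdf interface as an ordinary emitter, the estimator above is unchanged from a conventional path tracer.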
A Stationary SVBRDF Material Modeling Method Based on Discrete Microsurface
Junqiu Zhu, Yanning Xu, Lu Wang
Computer Graphics Forum (Pacific Graphics 2019)
Microfacet theory is commonly used to build reflectance models for surfaces.
While traditional microfacet-based models assume that the distribution of a surface's microstructure is continuous,
recent studies indicate that some surfaces with tiny, discrete and stochastic facets exhibit glittering visual effects,
while some surfaces with structured features exhibit anisotropic specular reflection. Accordingly, this paper proposes an efficient
and stationary surface material modeling method that processes both glittery and non-glittery surfaces in a consistent way. Our method
comprises two steps. In the preprocessing step, we take a fixed-size sample normal map as input, organize 4D microfacet trees in position and
normal space for arbitrary-sized surfaces, and cluster microfacets into 4D K-lobes via an adaptive k-means method. In the rendering step,
surface normals can be efficiently evaluated using the pre-clustered microfacets. Our method can efficiently render any structured,
discrete, or continuous micro-surface using a precisely reconstructed surface NDF.
It is both faster and more memory-efficient than state-of-the-art glittery surface modeling methods.
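The clustering step above can be sketched with plain Lloyd's k-means over 4D (position, projected-normal) samples; this is an illustrative simplification, since the paper uses an adaptive variant, and the deterministic first-k initialization is an assumption for clarity:

```python
import numpy as np

def kmeans_lobes(samples, k, iters=20):
    """Cluster 4D microfacet samples into k lobes with plain Lloyd's k-means.
    Returns (lobe centers, per-sample labels)."""
    centers = samples[:k].copy()          # deterministic init: first k samples
    labels = np.zeros(len(samples), dtype=int)
    for _ in range(iters):
        # Assign each sample to its nearest lobe center.
        dists = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned samples.
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = samples[mask].mean(axis=0)
    return centers, labels
```

Each resulting cluster center plus the spread of its members corresponds to one 4D lobe queried at render time.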