1. Introduction
A triangulated mesh is one of the typical data types for representing 3D models. Triangulated meshes are commonly generated from raw 3D coordinate data collected by 3D scanning devices such as Kinect sensors, laser scanners, and CT scanners. However, the raw 3D coordinate data contain considerable noise, and further noise is introduced during the 3D model reconstruction process [1,2]. Such noise poses challenges for 3D model visualization, segmentation, spatial analysis, object extraction, 3D printing, and so on [3,4,5]. Therefore, it is important to remove noise from triangulated meshes. The key issue is how to retain the original geometric structures and fine details while eliminating the noise. This problem becomes more challenging for surfaces containing complex shapes (e.g., narrow structures, multi-scale features, and fine details).
Over recent decades, filtering methods have been widely used in mesh denoising; they can be roughly divided into isotropic and anisotropic methods. Classical isotropic methods [6,7] mainly focus on removing surface noise but neglect the preservation of geometric features during filtering, so they tend to produce denoised results with significant shape distortion. To address this issue, many anisotropic filtering methods [8,9,10,11,12,13,14,15,16,17,18,19,20] have been proposed. Bilateral filtering is a representative anisotropic method; it has been successfully applied in image processing for its feature-preserving ability and has since been extended to geometry processing. Fleishman et al. [12] proposed bilateral mesh denoising, which removes noise directly by smoothing vertex positions. Zheng et al. [15] proposed bilateral normal filtering (BNLF), which filters face normals and then updates vertex positions. Although the bilateral normal filtering of [15] can preserve geometric features to some extent, it cannot effectively preserve sharp features, multi-scale features, and fine details under heavy noise, mainly because it lacks a reliable guidance normal field to steer the filtering. Therefore, Zhang et al. [21] presented a patch-shift method to compute a guidance normal field for bilateral normal filtering. Their method preserves sharp features well but may blur small-scale features and fine details because of its uniform strategy for constructing consistent neighborhoods. More specifically, when the surface contains complex shapes (e.g., narrow structures, multi-scale features, and fine details), the uniformly constructed neighborhoods inevitably contain geometric features, which causes those features to be blurred during normal filtering. Thus, it remains an open problem to find an effective strategy for constructing local neighborhoods that avoid including any geometric features. With well-constructed local neighborhoods (containing no geometric features), we can compute a guidance normal field that closely matches the underlying shape of the surface, which greatly improves the results of guided normal filtering.
In recent years, optimization-based methods have become another class of techniques for mesh and image denoising. To preserve sharp features, sparse optimization methods have been widely applied [3,5,22,23,24,25,26,27,28]. He and Schaefer [22] extended $\ell_0$ minimization to triangulated meshes to recover piecewise constant surfaces. Zhang et al. [23] and Wu et al. [24] applied total variation (TV) regularization to mesh denoising for its edge-preserving property. Although these $\ell_0$ and TV minimization methods achieve impressive results in preserving sharp features, they inevitably suffer from undesired staircase artifacts in smoothly curved regions. This drawback is particularly severe for $\ell_0$ minimization [22,29], which may flatten weak features and produce false edges in smoothly curved regions. Liu et al. [25] and Zhong et al. [26,30] proposed high-order methods to overcome these limitations of [22,23,24,29]. Their methods preserve sharp features while recovering smoothly curved regions well; however, under heavy noise they may smooth sharp features and blur fine details. Low-rank optimization methods [31,32,33,34] have also been introduced in mesh denoising to recover patches of the underlying surface that share similar patterns. Unfortunately, these low-rank methods cannot preserve sharp features well and, because of their multi-patch collaborative mechanism, can be computationally intensive.
More recently, learning-based methods [35,36,37,38] have been gaining widespread attention, with the advantage of being free of parameter tuning. Wang et al. [35] proposed a cascaded normal regression (CNLR) method that learns the relation between filtered results and the ground truth, so that noise can be removed without manually adjusting parameters. This method generally performs well for small-scale noise, but it cannot handle large-scale noise. To retain geometric structures and fine texture details, Wang et al. [36] put forward a two-stage learning method: first, the face-normal relation between noisy models and the ground truth is learned and the noise is removed; second, geometric structures and fine texture details are recovered by machine learning, so as to resolve the blurring introduced in the first stage. Although these learning-based methods are free of parameter tuning and preserve geometric features well, they depend heavily on the completeness of the training data set.
As we have seen, it is still quite challenging to preserve geometric features while removing noise, especially when the noisy mesh contains complex shapes (e.g., narrow structures, multi-scale features, and fine details). In view of these issues, we propose a guided normal filtering method based on adaptive consistent neighborhoods for mesh denoising. The adaptive consistent neighborhoods are constructed by a two-stage scheme. In the first stage, we design a consistency measurement to select, for each face, the local neighborhood with the most consistent normal orientations (called the coarse consistent neighborhood). Then, a graph-cut based approach is iteratively performed to construct the final consistent neighborhood without any features contained in it. Using the constructed adaptive consistent neighborhoods, we can readily obtain the guidance normal field of the surface for restoring the noisy normal field. Following the guided normal filtering, we reconstruct vertex positions to match the filtered normal field. Taking a noisy mesh as input, our mesh denoising method can recover the complex shapes of the surface well while removing noise. Specifically, the main contributions of this paper are as follows:
A reliable consistency measurement is designed to explicitly select the coarse consistent neighborhood containing the fewest features, thus providing a favorable, feature-preserving neighborhood for each mesh face. A graph-cut based scheme is then proposed to adaptively construct a more accurate neighborhood that contains no features. The constructed consistent neighborhoods are used to compute a more accurate guidance normal field.
A guided normal filtering method based on the adaptive consistent neighborhoods is proposed to restore the noisy normal field. We show the performance of our method on synthetic data, including CAD and non-CAD meshes, and on a variety of scanned data acquired by laser scanners and Kinect sensors. Experiments demonstrate that our method outperforms existing state-of-the-art mesh denoising methods both qualitatively and quantitatively.
The rest of the paper is organized as follows. In Section 2, we detail our guided normal filtering method based on constructing adaptive consistent neighborhoods. Then, our visual and numerical results are given in Section 3, and we discuss our mesh denoising method in various aspects in Section 4. Finally, we conclude the paper and give some comments for future work in Section 5.
2. Methodology
In this section, we first give a brief review of guided normal filtering and explain the motivation of our mesh denoising method. Then, we introduce our two-stage scheme for constructing adaptive consistent neighborhoods to compute a reliable guidance normal field. Finally, we describe the whole framework of our mesh denoising method.
2.1. Background of Guided Normal Filtering
Guided normal filtering [21] followed by vertex updating is a well-developed feature-preserving mesh denoising framework. The key of guided normal filtering is that it provides a robust guidance normal for each face of the mesh. For each face, the guidance normal is obtained by averaging the face normals in a patch that contains the current face. Then, joint bilateral filtering based on the computed guidance normal field is performed to obtain the filtered normal $\mathbf{n}_i'$ of face $f_i$ as follows:

$$\mathbf{n}_i' = \frac{1}{W_i} \sum_{f_j \in \mathcal{N}_i} A_j\, K_s\big(\lVert \mathbf{c}_i - \mathbf{c}_j \rVert\big)\, K_r\big(\lVert \mathbf{g}_i - \mathbf{g}_j \rVert\big)\, \mathbf{n}_j, \qquad (1)$$

where $W_i$ is a normalization factor that keeps $\mathbf{n}_i'$ a unit vector. $A_j$, $\mathbf{c}_j$, and $\mathbf{n}_j$ are the area, centroid, and face normal of the face $f_j$ in the 1-ring neighborhood $\mathcal{N}_i$ of the face $f_i$, respectively. $\mathbf{g}_i$ and $\mathbf{g}_j$ are the guidance normals of $f_i$ and $f_j$, respectively. $K_s$ and $K_r$ are Gaussian functions with variance parameters $\sigma_s$ and $\sigma_r$. $\mathbf{c}_i$ is the centroid of the face $f_i$, and $\lVert \cdot \rVert$ is the Euclidean norm. According to Equation (1), the filtered face normals of the surface can be obtained, and then we reconstruct vertex positions to match these filtered normals.
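As an illustration of Equation (1), the following Python/NumPy sketch performs one joint bilateral filtering pass over the face normals. The data layout (per-face areas, centroids, normals, guidance normals, and 1-ring face lists) and the parameter names sigma_s and sigma_r are our own assumptions, not the authors' implementation.

```python
import numpy as np

def guided_bilateral_normal_filter(normals, centroids, areas, guidance,
                                   ring1, sigma_s, sigma_r):
    """One pass of joint bilateral face-normal filtering (Equation (1)).

    normals, centroids, guidance : (F, 3) arrays of face normals, centroids,
        and guidance normals; areas : (F,) face areas;
    ring1 : list of index lists, ring1[i] = faces in the 1-ring of face i.
    """
    filtered = np.empty_like(normals)
    for i in range(len(normals)):
        j = np.asarray(ring1[i])
        # Spatial kernel on centroid distance, range kernel on guidance-normal distance.
        ks = np.exp(-np.sum((centroids[i] - centroids[j]) ** 2, axis=1) / (2 * sigma_s ** 2))
        kr = np.exp(-np.sum((guidance[i] - guidance[j]) ** 2, axis=1) / (2 * sigma_r ** 2))
        w = areas[j] * ks * kr
        n = (w[:, None] * normals[j]).sum(axis=0)
        filtered[i] = n / (np.linalg.norm(n) + 1e-12)   # normalization (the 1/W_i factor)
    return filtered
```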
Due to the robust estimation that the guidance provides for the true normals of the noisy mesh, guided normal filtering is superior in preserving sharp features and is robust to noise. However, when the mesh contains complex shapes (e.g., narrow structures, multi-scale features, and fine details), it cannot obtain a proper guidance, because the patch selected by its consistency measure is unreasonable. As shown in Figure 1, the most consistent neighborhood of the face (in purple) is selected according to the smallest value of that consistency measure, yet the selected neighborhood still contains sharp features, so the guidance of the face (in purple) is improper. Thus, the filtered results blur sharp features in these regions. To obtain more faithful results in such regions, we propose a two-stage scheme to construct adaptive consistent neighborhoods for guided normal filtering. The construction pipeline is demonstrated in Figure 2. In the first stage, inspired by [21,39], we define a new consistency measure to select a coarse consistent neighborhood for each face in a patch-shift manner. Then, a graph-cut based scheme is iteratively performed to adaptively construct different neighborhoods that match the corresponding local shapes of the mesh.
2.2. Coarse Consistent Neighborhood Selection
The consistent neighborhood is the key to recovering geometric features and fine details from a noisy mesh in the guided normal filtering (GNLF) framework. However, in regions with narrow structures, multi-scale features, and fine details, a reliable consistent neighborhood cannot be obtained in the GNLF framework, so the denoised results blur geometric features and details. To solve this problem, we propose a two-stage method to obtain an adaptive consistent neighborhood for guided normal filtering. The first stage selects a coarse consistent neighborhood for each face $f_i$ from all 1-ring neighborhoods that contain $f_i$. If all the faces in the neighborhood of $f_i$ have similar normal directions, then using this neighborhood to compute the guidance normal at $f_i$ yields more faithful denoising results. However, such a consistent neighborhood of $f_i$ is hard to obtain, especially in regions with complex shapes (e.g., narrow structures, multi-scale features, and fine details). As shown in Figure 1, a neighborhood whose faces have normal directions similar to that of the purple face cannot be found among all 1-ring neighborhoods containing $f_i$ by using the consistency measure of [21]; the GNLF method instead selects poor neighborhoods in which the face normal directions are disordered. Therefore, we propose a new consistency measure to search, in a patch-shift manner, for a coarse consistent neighborhood of each face in which the normal directions of the faces are as similar as possible. In the second stage, a graph-cut based scheme is iteratively performed on the coarse consistent neighborhood to adaptively construct different neighborhoods that match the corresponding local shapes of the mesh.
To evaluate the consistency of a candidate neighborhood of $f_i$, the new consistency measure is defined as

$$\Phi(f_i, P_k) = E(P_k)\, D(f_i, P_k), \qquad (2)$$

where $P_k$ is a 1-ring neighborhood of a face $f_k$ that contains the face $f_i$. $E(P_k)$ measures the flatness of $P_k$: it aggregates the deviation of each face normal in $P_k$ from the average normal $\bar{\mathbf{n}}$ of the neighborhood, with the face areas normalized by the number of faces $|P_k|$ and the average face area $\bar{A}$ of $P_k$; the smaller $E(P_k)$ is, the smoother the candidate neighborhood. $D(f_i, P_k)$ measures the similarity of the normal directions between the face $f_i$ and the neighborhood $P_k$, and a smaller value means higher similarity.

Thus, the product $\Phi(f_i, P_k) = E(P_k)\, D(f_i, P_k)$ measures the consistency of a candidate neighborhood of the face $f_i$ well, and the coarse consistent neighborhood $P_i^{0}$ of $f_i$ can be found in a patch-shift manner [21] by minimizing $\Phi$. As seen in Figure 1, the neighborhood of the face (in purple) found by $\Phi$ has face normal directions that are as similar as possible. Due to the narrow structure, however, this neighborhood still contains some geometric features. We therefore perform a graph-cut based scheme on the neighborhood, which adaptively splits faces into different sub-neighborhoods to match the corresponding local shapes of the mesh.
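As a rough illustration of the patch-shift selection, the sketch below scores every candidate 1-ring patch containing a face and keeps the best one. The concrete forms used for the flatness term E (area-weighted deviation from the patch-average normal) and the similarity term D (one minus the dot product with the patch-average normal) are assumptions made for illustration; the paper's exact definitions may differ.

```python
import numpy as np

def patch_flatness(face_ids, normals, areas):
    """Assumed flatness term E: area-weighted deviation of the face normals from
    the patch-average normal, normalized by patch size and mean face area."""
    n = normals[face_ids]
    a = areas[face_ids]
    n_bar = n.mean(axis=0)
    n_bar /= (np.linalg.norm(n_bar) + 1e-12)
    return np.sum(a * np.linalg.norm(n - n_bar, axis=1)) / (len(face_ids) * a.mean())

def normal_similarity(i, face_ids, normals):
    """Assumed similarity term D: one minus the cosine between n_i and the
    patch-average normal (smaller means more similar)."""
    n_bar = normals[face_ids].mean(axis=0)
    n_bar /= (np.linalg.norm(n_bar) + 1e-12)
    return 1.0 - float(np.dot(normals[i], n_bar))

def coarse_consistent_neighborhood(i, candidate_patches, normals, areas):
    """Patch-shift selection: among all 1-ring patches containing face i,
    return the one minimizing Phi = E * D (Equation (2))."""
    best_patch, best_score = None, np.inf
    for patch in candidate_patches:      # each patch is a list of face indices containing i
        score = patch_flatness(patch, normals, areas) * normal_similarity(i, patch, normals)
        if score < best_score:
            best_patch, best_score = patch, score
    return best_patch
```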
2.3. Adaptive Consistent Neighborhood Construction
When the mesh includes complex shapes (e.g., narrow structures, multi-scale features, and fine details), the coarse consistent neighborhood obtained in the first stage may still contain geometric features. In this case, using the coarse consistent neighborhood to calculate the guidance normal will blur sharp features. Therefore, a second-stage strategy based on a graph cut scheme is proposed to iteratively split faces off the first-stage consistent neighborhood, yielding an adaptive consistent neighborhood whose normal orientations are more similar to that of the current face. In each iteration, we first build a weighted graph on the given neighborhood to obtain the indicator vector, then use the indicator vector to bipartition the given neighborhood, and finally apply a measure to judge the rationality of the segmented neighborhood so as to avoid over-segmentation. The iterative graph cut scheme for obtaining the adaptive consistent neighborhood is sketched in Algorithm 1, and the main steps are as follows:
(1) Construct the Laplacian matrix. As graph construction has a crucial effect on the efficiency of the graph cut scheme, we first construct a graph over the given patch. We consider an undirected weighted graph $G = (T, E, W)$ composed of a node set $T$, an edge set $E$ connecting the nodes, and a similarity matrix $W$, where $|T|$ is the number of nodes. $W$ is a real symmetric matrix whose element $W_{ij}$ in the $i$-th row and $j$-th column is the weight assigned to the edge connecting nodes $T_i$ and $T_j$. The constructed graph is used to split faces of the given patch via the graph cut scheme, so the node set $T$ is the face set of the given patch.
In the graph cut scheme, an edge crossing the segmentation boundary of the patch should have a relatively low weight. To this end, if the faces corresponding to nodes $T_i$ and $T_j$ do not share a common edge, we set $W_{ij}$ to 0; otherwise, $W_{ij}$ is defined as a decreasing function of the difference between the face normals $\mathbf{n}_i$ and $\mathbf{n}_j$ that correspond to nodes $T_i$ and $T_j$, with a scaling parameter $\sigma$ controlling the decreasing speed of $W_{ij}$. Empirically, we set $\sigma$ to 0.8 in our experiments. In addition, a feature-indicator term in $W_{ij}$ penalizes edges crossing geometric features, identified by a user-specified angle threshold $\theta$ that will be discussed in Section 4. Then, we build the corresponding Laplacian matrix of the graph $G$ as $L = D - W$, where $D = \mathrm{diag}(d_1, \dots, d_{|T|})$ and $d_i$ is the sum of the $i$-th row elements of the similarity matrix $W$.
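A minimal sketch of this graph construction is given below. The Gaussian form of the normal-difference weight and the hard angle-threshold indicator are illustrative assumptions; only the overall structure (zero weight for non-adjacent faces, and L = D - W) follows the text.

```python
import numpy as np

def build_patch_laplacian(face_ids, normals, adjacency, sigma=0.8, theta_deg=60.0):
    """Build the similarity matrix W and the Laplacian L = D - W for a patch.

    face_ids : list of face indices in the patch (graph nodes);
    normals  : (F, 3) unit face normals;
    adjacency: set of frozensets of edge-adjacent face pairs.
    The Gaussian weight and the angle threshold theta are assumed forms.
    """
    m = len(face_ids)
    W = np.zeros((m, m))
    cos_theta = np.cos(np.radians(theta_deg))
    for a in range(m):
        for b in range(a + 1, m):
            fa, fb = face_ids[a], face_ids[b]
            if frozenset((fa, fb)) not in adjacency:
                continue                                      # non-adjacent faces: zero weight
            diff = np.linalg.norm(normals[fa] - normals[fb])
            w = np.exp(-(diff ** 2) / (sigma ** 2))           # assumed decreasing function
            if np.dot(normals[fa], normals[fb]) < cos_theta:  # assumed feature indicator
                w *= 1e-3                                     # attenuate feature-crossing edges
            W[a, b] = W[b, a] = w
    D = np.diag(W.sum(axis=1))
    return W, D - W                                           # similarity matrix and L = D - W
```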
(2) Obtain the segmented result. From the generalized eigensystem [40] of the Laplacian matrix $L$, we obtain the eigenvector $\mathbf{y}$ corresponding to the second smallest eigenvalue. Each element of $\mathbf{y}$ corresponds to a node of the graph $G$, and the values of the elements reflect the geometric distribution of the nodes in the graph. We sort the elements of $\mathbf{y}$ in increasing order to obtain the sorted vector $\tilde{\mathbf{y}}$, together with the index mapping $\pi$ from $\mathbf{y}$ to $\tilde{\mathbf{y}}$. A jump in the sorted vector $\tilde{\mathbf{y}}$ indicates a splitting point of the graph $G$. To find the jump effectively, we first build the first-order difference vector of $\tilde{\mathbf{y}}$, then search for its largest value and record the corresponding index in $\tilde{\mathbf{y}}$ as the splitting index $s$. Using the index mapping $\pi$ to recover the original order in the eigenvector $\mathbf{y}$, the splitting index $s$ divides the nodes of the graph $G$ into two sets. Finally, the set containing the current face is selected as the intermediate result of the adaptive consistent neighborhood.
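The bipartition step can be sketched as follows. Solving the generalized eigenproblem L y = λ D y with SciPy and splitting at the largest jump of the sorted eigenvector reflect our reading of the text; treat the details as assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def bipartition_patch(W, L, current_idx):
    """Split the patch graph into two node sets using the eigenvector of the
    second smallest generalized eigenvalue, and return the set containing the
    current face (given by its position current_idx in the node list)."""
    D = np.diag(W.sum(axis=1))
    # Generalized eigensystem L y = lambda D y; small ridge keeps D positive definite.
    vals, vecs = eigh(L, D + 1e-9 * np.eye(len(W)))
    y = vecs[:, 1]                      # eigenvector of the second smallest eigenvalue
    order = np.argsort(y)               # index mapping from y to the sorted vector
    y_sorted = y[order]
    jumps = np.diff(y_sorted)           # first-order difference vector
    s = int(np.argmax(jumps))           # splitting index at the largest jump
    set_low = set(order[: s + 1])       # nodes below the jump (original node indices)
    set_high = set(order[s + 1:])       # nodes above the jump
    return set_low if current_idx in set_low else set_high
```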
(3) Compute the stopping criterion. To avoid over-segmentation, we introduce a measure $\rho$ that quantifies the change of the consistency measure $\Phi$ (defined in (2)) between the neighborhood $P^{t-1}$ of the face from the last iteration and the segmented neighborhood $P^{t}$; a small positive number $\epsilon$ is added to the denominator to avoid division by zero. A threshold $\tau$ is introduced to manually tune the degree of segmentation. If $\rho > \tau$, the iterative graph cut scheme continues with the previous segmented result as the input patch; otherwise, the iteration stops and the final segmented neighborhood $P_i^{*}$ is output.
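Since the exact formula is not reproduced here, the sketch below assumes that the stopping measure is the relative change of the consistency measure between consecutive neighborhoods; the function and parameter names are illustrative.

```python
def consistency_change(face, patch_prev, patch_curr, consistency, eps=1e-8):
    """Assumed form of the stopping measure rho: relative change of the
    consistency measure Phi between the previous neighborhood and the
    segmented neighborhood; eps avoids division by zero."""
    phi_prev = consistency(face, patch_prev)
    phi_curr = consistency(face, patch_curr)
    return abs(phi_prev - phi_curr) / (phi_prev + eps)

# The graph cut iteration continues while consistency_change(...) > tau (a user threshold).
```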
Algorithm 1: Adaptive consistent neighborhood construction.
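As a rough end-to-end sketch of Algorithm 1 (the algorithm listing itself is not reproduced here), the loop below ties together the helper routines sketched above; the helper names, the two-face lower bound on the patch size, and the default threshold tau are illustrative assumptions.

```python
def adaptive_consistent_neighborhood(i, coarse_patch, normals, areas,
                                     adjacency, consistency, tau=0.05):
    """Iteratively bipartition the coarse consistent neighborhood of face i with
    the graph cut scheme until the consistency change drops to tau or below."""
    patch = list(coarse_patch)
    while len(patch) > 2:                              # too small to split further
        W, L = build_patch_laplacian(patch, normals, adjacency)
        kept = bipartition_patch(W, L, patch.index(i))
        new_patch = [patch[k] for k in sorted(kept)]
        if consistency_change(i, patch, new_patch, consistency) <= tau:
            return new_patch                           # small consistency change: stop
        patch = new_patch                              # otherwise keep splitting
    return patch
```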
2.4. Guided Normal Filtering with Adaptive Consistent Neighborhood
Through our two-stage scheme, adaptive consistent neighborhoods containing as few geometric features as possible are constructed to provide a robust estimation of the guidance normals for the noisy mesh. Then, the filtered face normals are obtained by joint bilateral filtering with this guidance. Finally, the vertex positions are reconstructed to match the filtered face normals. The whole iterative framework is listed in Algorithm 2, and the main steps are described below; the corresponding pipeline of our method is shown in Figure 2.
(1) Compute face normals of the input mesh. A mesh of arbitrary topology without degenerate triangles is represented as $M$, and its face set and vertex set are denoted as $F = \{f_1, \dots, f_{N_F}\}$ and $V = \{v_1, \dots, v_{N_V}\}$, respectively, where $N_F$ and $N_V$ are the numbers of faces and vertices of $M$. For a face $f_i$, its outward unit normal is calculated as

$$\mathbf{n}_i = \frac{(\mathbf{v}_{i,2} - \mathbf{v}_{i,1}) \times (\mathbf{v}_{i,3} - \mathbf{v}_{i,1})}{\lVert (\mathbf{v}_{i,2} - \mathbf{v}_{i,1}) \times (\mathbf{v}_{i,3} - \mathbf{v}_{i,1}) \rVert},$$

where $\mathbf{v}_{i,1}$, $\mathbf{v}_{i,2}$, and $\mathbf{v}_{i,3}$ are the positions of the three vertices of $f_i$ in a fixed orientation.
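A direct NumPy translation of this cross-product formula might look as follows (the array layout of vertices and faces is an assumption):

```python
import numpy as np

def face_normals(vertices, faces):
    """Outward unit normals of all faces.
    vertices : (N_V, 3) vertex positions; faces : (N_F, 3) vertex indices
    listed in a fixed (counter-clockwise) orientation."""
    v1, v2, v3 = (vertices[faces[:, k]] for k in range(3))
    n = np.cross(v2 - v1, v3 - v1)                        # (v2 - v1) x (v3 - v1)
    return n / (np.linalg.norm(n, axis=1, keepdims=True) + 1e-12)
```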
(2) Compute guidance normals. Through our two-stage scheme, the adaptive consistent neighborhood $P_i^{*}$ is obtained for each face $f_i$ of the input noisy mesh. We then use this neighborhood to compute the guidance normal $\mathbf{g}_i$ at each face $f_i$ as the normalized, area-weighted average of the face normals in $P_i^{*}$:

$$\mathbf{g}_i = \frac{\sum_{f_j \in P_i^{*}} A_j\, \mathbf{n}_j}{\big\lVert \sum_{f_j \in P_i^{*}} A_j\, \mathbf{n}_j \big\rVert},$$

where $A_j$ and $\mathbf{n}_j$ are the area and the face normal of the face $f_j$ in the neighborhood $P_i^{*}$, and $P_i^{*}$ is the adaptive consistent neighborhood of the face $f_i$.
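A small sketch of the guidance-normal computation, assuming the normalized area-weighted averaging form written above:

```python
import numpy as np

def guidance_normals(normals, areas, adaptive_patches):
    """Guidance normal of each face: normalized area-weighted average of the
    face normals over its adaptive consistent neighborhood.
    adaptive_patches[i] lists the face indices in the neighborhood of face i."""
    g = np.empty_like(normals)
    for i, patch in enumerate(adaptive_patches):
        idx = np.asarray(patch)
        s = (areas[idx, None] * normals[idx]).sum(axis=0)
        g[i] = s / (np.linalg.norm(s) + 1e-12)
    return g
```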
(3) Compute filtered normals. Since our adaptive consistent neighborhoods provide a robust estimation of the guidance for the true normals of the noisy mesh, we use Equation (1) to compute the filtered normals.
(4) Reconstruct vertex positions of the mesh. After obtaining the filtered face normal field, we update the vertex positions of the mesh to match it. To this end, we use the classical vertex updating scheme presented by Sun et al. [14]. Specifically, we reposition the vertices $\{\mathbf{x}_v\}$ by solving the following minimization problem:

$$\min_{\{\mathbf{x}_v\}} \sum_{f_j \in F} \sum_{(a,b) \in \partial f_j} \big( \mathbf{n}_j' \cdot (\mathbf{x}_a - \mathbf{x}_b) \big)^2, \qquad (12)$$

where $\mathbf{n}_j'$ is the filtered face normal of $f_j$ and $\partial f_j$ denotes the edges of the face $f_j$. By using gradient descent to solve problem (12), we reconstruct vertex positions with the following iterative formula:

$$\mathbf{x}_v^{(k+1)} = \mathbf{x}_v^{(k)} + \frac{1}{|F_v|} \sum_{f_j \in F_v} \mathbf{n}_j' \big( \mathbf{n}_j' \cdot (\mathbf{c}_j - \mathbf{x}_v^{(k)}) \big), \qquad (13)$$

where $\mathbf{x}_v^{(k+1)}$ is the updated position of the vertex $v$, $F_v$ is the set of mesh faces that share the vertex $v$, $|F_v|$ is the number of faces contained in $F_v$, and $\mathbf{c}_j$ is the centroid of $f_j$. More details can be found in [14].
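The iterative update of Equation (13) can be sketched as follows; the face-to-vertex incidence list vertex_faces and the fixed number of passes are assumed inputs:

```python
import numpy as np

def update_vertices(vertices, faces, filtered_normals, vertex_faces, iterations=10):
    """Sun et al.-style vertex updating (Equation (13)): move each vertex along the
    filtered normals of its incident faces toward consistency with the normal field.
    vertex_faces[v] lists the faces sharing vertex v."""
    x = vertices.copy()
    for _ in range(iterations):
        centroids = x[faces].mean(axis=1)                 # face centroids c_j
        x_new = x.copy()
        for v, inc in enumerate(vertex_faces):
            if not inc:
                continue
            idx = np.asarray(inc)
            n = filtered_normals[idx]                     # filtered normals n'_j
            dots = (n * (centroids[idx] - x[v])).sum(axis=1)   # n'_j . (c_j - x_v)
            x_new[v] = x[v] + (n * dots[:, None]).sum(axis=0) / len(inc)
        x = x_new
    return x
```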
Algorithm 2: Our mesh normal filtering framework.
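Finally, a compact sketch of how the routines sketched in the previous subsections could be wired together into the overall framework of Algorithm 2; the precomputed mesh quantities passed as arguments and the default iteration counts are illustrative assumptions.

```python
def denoise_mesh(vertices, faces, areas, centroids, ring1, candidate_patches,
                 adjacency, vertex_faces, sigma_s, sigma_r,
                 normal_iters=20, vertex_iters=10, tau=0.05):
    """One run of the framework: face normals -> adaptive consistent neighborhoods
    -> guidance normals -> guided bilateral filtering -> vertex updating.
    areas, centroids, ring1, candidate_patches, adjacency, vertex_faces are
    precomputed mesh quantities (assumed to be supplied by the caller)."""
    normals = face_normals(vertices, faces)

    # Consistency measure Phi = E * D used by the stopping criterion (assumed form).
    def consistency(i, patch):
        return patch_flatness(patch, normals, areas) * normal_similarity(i, patch, normals)

    # Two-stage construction of the adaptive consistent neighborhood of every face.
    patches = []
    for i in range(len(faces)):
        coarse = coarse_consistent_neighborhood(i, candidate_patches[i], normals, areas)
        patches.append(adaptive_consistent_neighborhood(
            i, coarse, normals, areas, adjacency, consistency, tau))

    guidance = guidance_normals(normals, areas, patches)
    for _ in range(normal_iters):                      # iterate the guided normal filtering
        normals = guided_bilateral_normal_filter(
            normals, centroids, areas, guidance, ring1, sigma_s, sigma_r)

    return update_vertices(vertices, faces, normals, vertex_faces, vertex_iters)
```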
5. Conclusions
The main novelty and findings of this paper are as follows. Firstly, we design a new consistency measurement to explicitly select the coarse consistent neighborhood containing the fewest geometric features. Then, we present a graph-cut based technique to adaptively construct a more accurate neighborhood that contains no geometric features. Using the constructed consistent neighborhoods, we can calculate an accurate guidance normal field. The adaptive consistent neighborhood of each face is built by the proposed two-stage approach, which is the key part of this paper; by constructing it, we avoid the influence of features lying in the local neighborhood of the face. Based on the constructed adaptive consistent neighborhoods, we apply guided normal filtering to restore the noisy normal field, and then reconstruct vertex positions to match the filtered normal field. Our mesh denoising method preserves geometric features well and is robust for models with complex structures (e.g., narrow structures). We have compared our method with state-of-the-art methods visually and numerically, and discussed it from various aspects.
Although our mesh denoising method performs better than the compared state-of-the-art methods, its computational cost is high. The graph cut procedure is currently performed sequentially for each mesh face; since the faces are processed independently, it can potentially be parallelized using OpenMP or CUDA in future work. Moreover, we plan to extend our consistency measurement and adaptive guided normal filtering method to point clouds.