Reverse Engineering Closely-Spaced Free-Form Shapes for a Fabric-Over-Body Model
Daniel Chen, Anish Ravindran, Pradeep Vishwabrahmanasaraf
DOI: 10.4236/eng.2011.310127

Abstract

This paper presents a case study of reverse engineering closely-spaced free-form shapes. The raw point cloud data captured from a body scanner were processed to filter out most of the noise and redundancy, and then used to generate meshes through triangulation of points. After removal of the inconsistencies resulting from residual noise, the cleaned-up meshes were used to reconstruct the free-form surfaces representing a fabric layer and a human body surface. The solid produced between these two surfaces is the fabric-over-body model. It was used to generate an FEA (finite-element analysis) mesh, and quality checks, such as distortion and stretch, were performed on all the meshed tetrahedral elements. The purpose is to prepare an FEA-ready model for future CFD (computational fluid dynamics) analysis.

Share and Cite:

Chen, D., Ravindran, A. and Vishwabrahmanasaraf, P. (2011) Reverse Engineering Closely-Spaced Free-Form Shapes for a Fabric-Over-Body Model. Engineering, 3, 1022-1029. doi: 10.4236/eng.2011.310127.

1. Introduction

Reverse engineering has become a viable method of creating a 3D virtual model of an existing physical part for CAD/CAM/CAE applications. The reverse engineering process involves measuring a physical object and then reconstructing it as a 3D model. The physical object can be measured using a number of different technologies which include CMM (coordinate measuring machines), laser scanners, structured light digitizers, or computed tomography. The point cloud data obtained from the measurement usually lack topological information, and are therefore often converted into a more usable format such as a CAD model, a triangular faced mesh, or a set of NURBS (non-uniform rational B-spline) surfaces [1-4].

One of the main objectives of this study was to reverse engineer a pair of closely-spaced free-form shapes. The body surface of a manikin and the layer of cloth that covered it were measured and reconstructed as the free-form shapes. To obtain the scanned surface images, a 3D body scanner was used to scan the manikin with and without clothes, and the scans were processed using CAD software. The three major steps toward successfully reverse engineering a free-form shape in this study were acquisition of raw point cloud data, processing of the raw data, and mesh generation and clean-up. The raw point cloud data, acquired from the 3D body scanner, contained a great deal of noise and redundancy and thus required processing in CATIA V5 [5] to reduce the data size while staying true to the original shape of the scanned object. Meshes were generated from the processed point clouds, and surfaces were later reconstructed from the meshes. The automated mesh generation process triangulates the closest three points in the cloud until the entire point cloud is networked to form a triangulated surface. The meshes were cleaned up to remove all inconsistencies such as non-manifold vertices and edges. The free-form surfaces were then reconstructed from these clean meshes.

The other main objective was to develop a fabric-over-body model. The model was defined as the space between the fabric and the body surface, and was captured by superimposing the two reconstructed free-form surfaces. However, each of the two surfaces needed to be closed to form a solid before performing a Boolean operation. The model was further developed into an FEA (finite element analysis)-ready model for downstream CFD (computational fluid dynamics) applications in thermo-fluid analysis. The FEA mesh was generated in I-DEAS [6] based on the geometry of the fabric-over-body model. Quality checks were used to identify and remove unwanted irregularities in the mesh, such as distorted and stretched tetrahedral elements.
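To make the element-quality criterion concrete, the sketch below computes one commonly used stretch measure for a tetrahedral element: the inscribed-sphere radius scaled by 2√6 and divided by the longest edge, which equals 1 for a regular tetrahedron and tends to 0 for a degenerate one. This is a generic textbook metric offered only as an illustration; I-DEAS may define its distortion and stretch checks differently.

```python
import numpy as np

def tet_stretch(p0, p1, p2, p3):
    """Stretch quality of a tetrahedron: 2*sqrt(6)*inradius / longest edge.
    Equals 1.0 for a regular tetrahedron and approaches 0 as the element
    degenerates. (Generic metric; I-DEAS may use a different definition.)"""
    pts = np.array([p0, p1, p2, p3], dtype=float)
    # Volume from the scalar triple product.
    vol = abs(np.dot(pts[1] - pts[0],
                     np.cross(pts[2] - pts[0], pts[3] - pts[0]))) / 6.0
    # Total surface area of the four triangular faces.
    faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
    area = sum(0.5 * np.linalg.norm(np.cross(pts[j] - pts[i], pts[k] - pts[i]))
               for i, j, k in faces)
    inradius = 3.0 * vol / area          # r = 3V / A for a tetrahedron
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    longest = max(np.linalg.norm(pts[j] - pts[i]) for i, j in edges)
    return 2.0 * np.sqrt(6.0) * inradius / longest

# A regular tetrahedron scores ~1.0; heavily flattened elements score near 0.
print(tet_stretch((1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)))
```

Elements whose stretch falls below a chosen threshold would be flagged for local remeshing, which is the role the quality checks play here.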

2. Point Clouds

2.1. Acquisition of Raw Point Clouds

The equipment used to acquire the raw point cloud data was a VITUS/Smart 3D Body Scanner by Vitronic [7]. It is designed to produce highly realistic 3D images of the human body based on optical triangulation technology. The purpose of utilizing this body scanner was primarily rapid digitizing. The laser-based non-contact scanner can digitize objects in 11 seconds with an accuracy of 0.1%. Although its finest resolution is one millimeter and it does not scan cavities, it allows dense point clouds to be acquired quickly with no more than three scans. To acquire a dense point cloud for this case study, however, a single scan was usually sufficient, because the optical triangulation between the charge-coupled device (CCD) camera [8], the laser, and the manikin can capture all the features where vertical view obstruction is minimized. Figure 1 illustrates the acquired raw point cloud data of the manikin with and without the clothes. The data were then exported as ASCII files from the Human Solutions software provided by Vitronic to CATIA V5. Figure 2 illustrates these raw point cloud data in ASCII format.

2.2. Processing of Raw Point Clouds

Because eight CCD cameras were involved in the scanning, the raw point cloud data acquired from the body scanner contained multiple patches that were too dense (depicted in Figure 2). They contained a great deal of noise and redundancy, which resulted in enormous data sizes. For easier mesh generation and clean-up, both raw point clouds had to be processed while staying true to the original shapes of the digitized manikin with and without the clothes. Figure 3 shows the procedure used to process the raw point cloud data and obtain clean meshes.


Figure 1. Raw point cloud data acquired from 3D body scanner.


Figure 2. Raw point cloud data in ASCII format.

Figure 3. Procedure to process point cloud data and obtain a clean mesh.

At first, unwanted points were removed from the point clouds. The tools in the Digitized Shape Editor workbench of CATIA V5 were used to automatically detect and remove outliers in both raw point clouds. In planar orientation, the two point clouds were oriented about the coordinate planes. Multiple clouds can be aligned based on reference points and superimposed over each other using the cloud-to-cloud alignment tool, a technique also called Intelligent Registration [9,10]. It was achieved by making sure that both point clouds included at least one common feature during digitizing, such as the head (or hand) of the manikin in this case. The two clouds must be accurately aligned before they are merged (cloud union) into the unified point cloud depicted in Figure 4. To ensure the two point clouds shared the same center-of-axis, the manikin was securely clamped to the platform of the 3D body scanner when the clothed manikin went through the first scan. The fabric was then carefully cut away prior to the second scan, so that the manikin without the fabric would not shift from its original position due to air movement.
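The cloud-to-cloud alignment step can be sketched with a standard rigid-registration (Kabsch/Procrustes) calculation on a handful of corresponding landmark points, such as points picked on the head or hand in both scans. This is a generic illustration of the idea, not the actual algorithm used by the CATIA or Rapidform tools; the point arrays are hypothetical inputs.

```python
import numpy as np

def rigid_align(ref_a, ref_b):
    """Best-fit rotation R and translation t mapping landmark points ref_a
    onto ref_b (Kabsch/Procrustes solution). ref_a, ref_b are (N, 3) arrays
    of corresponding reference points picked from the two scans."""
    A, B = np.asarray(ref_a, float), np.asarray(ref_b, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)          # centroids
    H = (A - ca).T @ (B - cb)                        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# Apply the transform to the whole first cloud so both clouds share one frame:
# R, t = rigid_align(landmarks_a, landmarks_b)
# aligned_cloud = cloud_a @ R.T + t
```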

Homogeneous filtering was applied to further reduce noise and redundancy. It uses a sphere for homogeneous point removal to thin the point cloud evenly. The sphere starts on the first point encountered and hides all the points inside it; the sphere then moves to the next remaining point and hides the points it contains, and so on. In this way the sphere maintains an equal spacing, 10 millimeters in this case, between points. Figure 5 illustrates the fully processed point clouds, which were thoroughly cleaned up, leaving little noise before mesh generation.
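A minimal sketch of this sphere-based thinning, assuming the point cloud is available as an N×3 array (the 10 mm radius matches the value used here), might look as follows; it approximates the idea rather than reproducing CATIA's actual filter.

```python
import numpy as np
from scipy.spatial import cKDTree

def homogeneous_filter(points, radius=10.0):
    """Thin a point cloud so no two kept points are closer than `radius`
    (10 mm here): visit each remaining point, keep it, and hide every
    point inside its sphere."""
    pts = np.asarray(points, dtype=float)
    tree = cKDTree(pts)
    hidden = np.zeros(len(pts), dtype=bool)
    kept = []
    for i in range(len(pts)):
        if hidden[i]:
            continue
        kept.append(i)
        # Hide all points inside the sphere centred on the kept point.
        hidden[tree.query_ball_point(pts[i], radius)] = True
    return pts[kept]

# thinned = homogeneous_filter(raw_points, radius=10.0)
```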

3. Meshes

3.1. Mesh Generation

Mesh generation is an automated process of connecting the closest three points to form a triangle. This triangulation of points is repeated until the entire point cloud has been networked to form an unambiguous, coherent, and consistent triangulated surface [11,12]. The noise over a mesh surface depends on how clean the processed point cloud data are prior to triangulation. Problems may occur in the generated mesh due to irregularities in the imported data, including non-manifold vertices and edges, redundant and acutely angled triangles, triangles with inconsistent orientation, etc. Due to residual noise left over from the processed point clouds of the manikin

Figure 4. Unified point cloud.


Figure 5. Processed point clouds of manikin with and without clothes.

with and without the clothes, the initial mesh generation from them had some inconsistencies.
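Although the surface triangulation of a 3D scan is performed inside the CAD software, the basic idea of networking nearby points into triangles can be illustrated with a planar Delaunay triangulation, shown below as a simplified two-dimensional analogue on synthetic points (not the CATIA algorithm itself).

```python
import numpy as np
from scipy.spatial import Delaunay

# Simplified 2D analogue of point-cloud triangulation: connect neighbouring
# points into triangles. The real scan surface is triangulated directly in 3D
# by the CAD software.
rng = np.random.default_rng(0)
points_2d = rng.random((200, 2)) * 100.0      # synthetic x-y coordinates in mm
tri = Delaunay(points_2d)

print(f"{len(points_2d)} points networked into {len(tri.simplices)} triangles")
# tri.simplices is an (M, 3) array of point indices, one row per triangle.
```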

3.2. Mesh Clean-up

Figure 6 illustrates the generated mesh with and without these inconsistencies. The left side shows a mesh that is not clean, because it has holes, non-manifold vertices, and non-manifold edges. As shown in Figure 3, mesh clean-up is the step carried out after mesh generation. In mesh clean-up, the “mesh cleaner” tool in the Digitized Shape Editor workbench was used to clear non-manifold vertices and edges. In addition, any remaining unwanted triangles were interactively removed prior to hole filling. The hole filling was done automatically using surface information or volumetric algorithms. Smoothing is a step towards refining the mesh so that the surfaces can be reconstructed with better quality and greater accuracy. The automatic “mesh smoothing” tool requires user input, and its effect is global.
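The non-manifold checks performed by the mesh cleaner can be approximated by counting how many triangles share each edge: an edge used by more than two triangles is non-manifold, while an edge used by only one triangle bounds a hole. The sketch below is a generic version of this check, not the CATIA tool itself.

```python
from collections import Counter

def edge_report(triangles):
    """Classify mesh edges from a list of (i, j, k) vertex-index triangles.
    Edges shared by more than two triangles are non-manifold; edges used
    only once lie on the boundary of a hole."""
    counts = Counter()
    for i, j, k in triangles:
        for a, b in ((i, j), (j, k), (k, i)):
            counts[tuple(sorted((a, b)))] += 1
    non_manifold = [e for e, c in counts.items() if c > 2]
    boundary = [e for e, c in counts.items() if c == 1]
    return non_manifold, boundary

# Two triangles sharing edge (1, 2) form a clean strip: no non-manifold
# edges, four boundary edges.
print(edge_report([(0, 1, 2), (1, 3, 2)]))
```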

Decimation and optimization were performed next. They were done interactively to obtain the desired accuracy and add sharpness to the clean mesh. The decimation significantly reduced the number of triangles over low-curvature or flat areas while maintaining a high triangle count over high-curvature regions, via curvature-based sampling. The optimization adjusted the edge lengths of triangles to a specified range and recognized the adjacent edges of a triangular fan to a specified angle. Figure 7 shows the clean meshes of the manikin with and without the clothes at the completion of decimation and optimization.
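As an illustration of the decimation step, the sketch below uses quadric decimation from the Open3D library as a stand-in for CATIA's curvature-based decimation; quadric decimation likewise preserves detail where the surface bends sharply. The mesh file name is hypothetical and the triangle target is arbitrary.

```python
import open3d as o3d  # assumed available; any mesh library with decimation works

# Load the generated mesh (hypothetical file name) and reduce its triangle
# count. Quadric decimation keeps more triangles where curvature is high,
# standing in here for CATIA's curvature-based decimation.
mesh = o3d.io.read_triangle_mesh("manikin_mesh.stl")
decimated = mesh.simplify_quadric_decimation(target_number_of_triangles=50000)

# Basic clean-up of the decimated result before surface reconstruction.
decimated.remove_degenerate_triangles()
decimated.remove_non_manifold_edges()
print(len(mesh.triangles), "->", len(decimated.triangles), "triangles")
```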

4. Free-Form Surfaces and Fabric-over-Body Model

Figure 8 illustrates the free-form surfaces reconstructed from the clean meshes of the manikin with and without the clothes using the Quick Surface Reconstruction workbench. Each surface thus created was closed using the close-surface tool to generate a solid. To produce the fabric-over-body model, the solid of the manikin without clothes was then used to cut the one with clothes. As illustrated in Figure 9, this Boolean operation (union trim) generated a solid fabric-over-body model defined by the space between the fabric and the body surface. In Figure 9, the highlighted areas A and B have no air volume, because the fabric surface touches the body in these areas.
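The cut between the two closed solids can be sketched with a mesh Boolean in an open-source tool, shown below purely as an illustration of the operation performed with CATIA's union trim; the file names are hypothetical, and a Boolean back-end (e.g. Blender or OpenSCAD) must be available to trimesh.

```python
import trimesh  # assumed available; any solid modeler with Booleans would do

# Hypothetical file names for the two closed solids described above.
clothed = trimesh.load("manikin_with_fabric.stl")
body = trimesh.load("manikin_body.stl")

# Subtract the body solid from the clothed solid, leaving only the air gap
# between fabric and skin: the fabric-over-body volume.
air_gap = clothed.difference(body)
print("fabric-over-body volume:", air_gap.volume)
```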

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] J. Bohm and M. Pateraki, “From Point Samples to Surfaces—On Meshing and Alternatives,” Proceedings of the ISPRS Commission V Symposium ‘Image Engineering and Vision Metrology’, Dresden, 25-27 September 2006. http://www.ifp.uni-stuttgart.de/publications/2006/boehm_pateraki06_ISPRS_CommV_Dresden.pdf
[2] K. Soni, D. Chen and T. Lerch, “Parameterization of Prismatic Shapes and Reconstruction of Free-Form Shapes in Reverse Engineering,” International Journal of Advanced Manufacturing Technology, Vol. 41, No. 9, 2009, pp. 948-959. doi:10.1007/s00170-008-1550-1
[3] D. Rogers, “An Introduction to NURBS with Historical Perspective,” Morgan Kaufmann Publishers, San Francisco, 2001.
[4] A. Wulamu, M. Goetting and D. Zeckzer, “Approximation of NURBS Curves and Surfaces Using Adaptive Equidistant Parameterizations,” Tsinghua Science & Technology, Vol. 10, No. 3, 2005, pp. 316-322. doi:10.1016/S1007-0214(05)70075-4
[5] Dassault Systèmes, CATIA V5, 2011. http://www.3ds.com/products/catia/portfolio/catia-v5/catia-v5r21/
[6] Siemens NX, I-DEAS, 2011. http://www.plm.automation.siemens.com/en_us/products/nx/
[7] Vitronic, VITUS/Smart 3D Body Scanner, 2011. http://www.vitronic.de/en/bodyscannen/complete-body-scanning/
[8] SearchStorage.com, Charge Coupled Device Camera, 2011. http://searchstorage.techtarget.com/sDefinition/0,,sid5_gci295633,00.html
[9] T. Grimm, “Reverse Engineering: Magic, Mystique, and Myth,” Desktop Engineering, Vol. 10, No. 12, 2005, pp. 32-37.
[10] Rapidform, Intelligent Registration, 2011. http://www.rapidform.com/57
[11] D. Gibson, “Parametric Feature Recognition and Surface Construction from Digital Point Cloud Scans of Mechanical Parts,” Thesis, University of Oklahoma, Norman, 2004.
[12] P. Waterman, “3D Data at Work,” Desktop Engineering, Vol. 9, No. 11, 2004, pp. P18-P23.
[13] J. Cabello, “Toward Quality Surface Meshing,” Proceedings of the 12th International Meshing Roundtable, Santa Fe, 14-17 September 2003, pp. 201-213.
[14] A. Fabri, P. Alliez and M. Yvinec, “Triangulations and Mesh Generation in the Computational Geometry Algorithms Library,” CGAL Day of the 10th Anniversary of LIAMA, Beijing, 19 January 2007. http://www.cgal.org/Events/Liama_Beijing_2007_documents/TriangulationsInCGAL.pdf
[15] S. Oudot, “Delaunay Triangulation,” Course Handouts, Stanford University, Palo Alto, 2011. http://graphics.stanford.edu/courses/cs368-06-spring/handouts/Delaunay_1.pdf
[16] L. H. Beni, M. A. Mostafavi, J. Pouliot and R. Therrien, “Developing an Adaptive Topological Tessellation for 3D Modeling in Geosciences,” Geomatica, Vol. 63, No. 4, 2009, pp. 419-431.
[17] F. Aurenhammer and R. Klein, “Handbook of Computational Geometry,” Amsterdam, 2000.
[18] L. Chen and J. Xu, “Optimal Delaunay Triangulation,” Journal of Computational Mathematics, Vol. 22, No. 2, 2004, pp. 299-308.
[19] M. Lawry, “I-DEAS Student Guide,” 2nd Edition, McGraw-Hill, New York, 2004.
