How to optimize 3D CAD models for maximum performance?


The abbreviation CAD stands for Computer-Aided Design, or in plain terms, the use of a computer for design and construction work. In general, then, we can call any graphical 3D model a CAD model, and any 3D editor CAD software.

One of the first and most widely used CAD programs among professionals is AutoCAD from the equally important company Autodesk. Since the release of the first AutoCAD in 1982, the company has delivered dozens of other professional products: construction and design CAD software for various industries, especially engineering, construction and architecture. Of the more modern ones, I would mention tools for animation, visual effects and photogrammetry.

So the question is not whether you will encounter CAD models when developing applications or games for virtual reality, but whether you work with them as efficiently as possible. Whether you use professional software from Autodesk or prefer a cheaper alternative, such software is usually designed for a specific purpose: modeling with high dimensional and shape accuracy, generating production drawings and other documentation, performing accurate strength calculations, rendering photorealistic visualizations, and other goals that often run counter to any optimization of the models for seamless real-time display, e.g. on less powerful devices.

But what if you need to work with CAD models outside the design process itself? For example, to use them in an interactive virtual reality application, a maintenance application in augmented reality, or just a web catalog? In all of these cases you are limited by the performance of the platform and device on which the model is displayed, and you will probably have to deal with performance optimization of the CAD model. This article will advise you how to perform the optimization manually as well as how to automate it.

Textures, polygons and draw calls of a CAD model

If we look at any 3D computer model, it can be described simply as a mesh of adjoining polygons defining the shape of the object, plus textures defining its appearance (surface and color). The rendering performance of a given object is then a function of both. Simplified:

  • Polygons

    A polygon is a closed shape bounded by a finite number of line segments that meet at the vertices of the polygon. If we fill the polygon, we get a face. A single polygon gives us only a flat surface; to model any other shape, we have to place more polygons next to each other in a so-called polygon mesh. The vertices of the polygons then simultaneously become shared vertices of the mesh. The more articulated the surface of the model, or the more accurate the calculations we require (e.g. for better modeling of deformations), the more vertices are needed to capture all the detail.

    In principle, the rendering complexity of a model is a function of the number of its vertices, edges and faces. For simplicity we usually talk only about the number of polygons, because the counts of vertices, faces and edges are a function of it. The established convention is to divide models (by polygon count) into "Low poly" and "High poly"; unfortunately, this designation can be misleading for several reasons, most basically:

    • The Low poly designation is not strictly limited by a maximum possible number of polygons. Models with fewer than 5 to 10 thousand polygons are generally considered low poly, but technically a low poly model is one that achieves the desired shape with as few polygons as possible, regardless of their final number.
    • The number of polygons depends strongly on their geometric shape. In principle, it should always be based on the basic polygon: the triangle. After all, any shape can be decomposed into triangles, and the graphics card performs its calculations on exactly this format. Graphics programs, including Unity, nevertheless also allow non-triangle meshes, such as quads. If, for example, a rectangular grid is used, its polygon count is seemingly half that of the equivalent triangle mesh.

    When assessing the rendering performance of a model, you can look at a more relevant figure than the number of polygons, namely the number of vertices of the mesh. Each such vertex is assigned an exact position in XYZ space and its resistance to change based on the stress at that vertex of the mesh (for example, to calculate the deformation during a collision with another body). It is the vertex count that is crucial in rendering, where the position and shape of the entire body is tracked and calculated in real time from the positions of the mesh vertices, or individual polygons. The higher the number of vertices in our 3D model, the more points the GPU must traverse and recalculate to a new position during rendering to correctly determine the model's shape, motion and deformation.

    An extreme in polygon count are the models used for FEM analyses in mechanical engineering. Here, however, the triangular mesh is most often generated in specialized software for performing strength calculations (the stress is determined at each point for a defined load and then compared with the maximum allowable stress of the material for the given type of loading). If we receive a model that already carries such a dense mesh of polygons (chosen according to the segmentation of the component and the required accuracy of the calculation in individual places), it is advisable to ask for the model from the phase before this mesh was applied.

    Where to find the number of polygons, vertices, faces and other parameters of the model?

    The number of polygons and related information can be found through any 3D modeling and rendering software, for example:

    • Blender 3D modeling software: In Blender, model statistics are always available at the very bottom right of the program's information bar (the bar must be manually activated in Blender 2.9 and above). Here you can find the number of vertices (Verts), edges (Edges), faces (Faces) and triangular polygons (Tris).
    • Unity game engine: In Unity, you can get overall statistics about the objects rendered by the camera at any time in the "Game" window, under the "Stats" tab. Unity shows the number of triangular polygons under the "Tris" key and the number of mesh vertices under the "Verts" key. When measuring, make sure the entire object (or all objects) whose polygon and vertex counts you want to find is in the camera's field of view.
  • Textures

    A texture can be imagined as a cut-out image that completely wraps the mesh defining the shape of the body from the previous point. The image reflects the appearance of the specific part it covers. If we unwrapped a cube, we would get a texture in the shape of the "hopscotch doll" that children draw on the street; in the case of a globe, a 2D world map.

    In the simplest case, a given texture is just "wrapping paper" of one color. However, if we want a truly realistic look, we "wrap" the object not only in high-quality photos but also in multiple layers of texture, including reflections.

    In order for a texture deployed this way to actually stay tied to the mesh defining the object, the vertices forming the polygons from the previous point carry U and V coordinates in addition to the XYZ coordinates; these say which specific point of the texture should lie at that particular vertex of the object's body. This principle is called "UV mapping".

    Why is choosing the right textures important?

    If we open the same image saved at different compression levels on a computer, we find that the higher its pixel count and the associated size of the source file, the longer it takes to load. The same holds even more strongly for textures in applications: the higher the demands on their display on a particular object, the higher the demands on running the application.

  • 3D Model Structure and Draw Calls

    While the game / application is running, the game engine renders only the objects in the field of view, which brings a multiple improvement in performance. A problem occurs with larger scenes and models, such as large buildings. If we import such a model as a single whole, the application will always render the entire object no matter where in the building we stand or which part of it we see (the application cannot distinguish and render only a specific part of one model). Conversely, if the same object is composed of thousands of subparts, then even though only the visible ones are rendered, the number of draw calls associated with rendering them jumps (a draw call is a recalculation of position and appearance on the CPU, subsequently rendered on the GPU, performed separately for each material; more information below). Draw calls are among the most demanding operations, and a low number of draw calls is many times more important than a low number of polygons. For optimal performance, it is necessary to find a compromise: grouping individual parts of the scene into logical units.
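
The compromise above can be illustrated with a toy cost model (all numbers and the grouping scheme are hypothetical, chosen only to show the trade-off): each visible group with its own material costs roughly one draw call, while an unsplit model renders all of its polygons even when only a fraction is in view.

```python
# Toy model (hypothetical numbers): estimate draw calls and rendered polygons
# for a scene split into groups, when only a fraction of groups is visible.

def draw_calls(total_parts, parts_per_group, visible_fraction):
    """Estimate draw calls when only visible groups are rendered."""
    groups = -(-total_parts // parts_per_group)  # ceiling division
    return max(1, round(groups * visible_fraction))

def rendered_polygons(total_polygons, total_parts, parts_per_group, visible_fraction):
    """Estimate polygons rendered: a whole group is drawn if any part is visible."""
    groups = -(-total_parts // parts_per_group)
    visible_groups = max(1, round(groups * visible_fraction))
    return total_polygons * visible_groups // groups

# A building with 4,000 parts and 2,000,000 polygons, 10 % of it in view:
# one monolithic mesh: a single draw call, but all 2M polygons rendered
print(draw_calls(4000, 4000, 0.10), rendered_polygons(2_000_000, 4000, 4000, 0.10))
# one mesh per part: only visible polygons, but hundreds of draw calls
print(draw_calls(4000, 1, 0.10), rendered_polygons(2_000_000, 4000, 1, 0.10))
# grouped into logical units of 100 parts: a workable compromise
print(draw_calls(4000, 100, 0.10), rendered_polygons(2_000_000, 4000, 100, 0.10))
```

In this sketch, grouping into units of 100 parts keeps the draw calls in single digits while rendering only a tenth of the polygons, which is exactly the compromise the text recommends.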

What does a well-optimized CAD model look like?

A well-optimized CAD model is one that is displayed using the lowest possible computational power while meeting all the associated needs of the application.

Principle of optimization of the number of polygons

When considering reducing the number of polygons, we must first clarify two fundamental questions:

  1. How high a dimensional accuracy must the optimized CAD model achieve, i.e. how large a dimensional and shape deviation is acceptable?
  2. If the model is to be destructible in the application (e.g. its shape is destroyed after an impact), how accurate and high-quality must this destruction be?

As explained earlier, the shape of an object is always defined by the vertices of its polygons, whose positions are recalculated in real time while the application is running. The more of these points we have on the surface of the object, the more precise shapes and dimensions can be achieved in both the initial and the destroyed state.

From the point of view of the calculations performed by the GPU, it is optimal if the individual polygons have a similar shape and the largest possible size (= a smaller number of polygons in the model) while maintaining their function. To eliminate unnecessary calculations, we should also avoid wasted work in occluded places; for more information, search for the term "overshading".

Manual optimization of the polygon count, which is enormously time-consuming, tedious and error-prone, means always looking at a specific group of polygons and reducing it to the optimal count depending on its location on the object and its function. In practice, when we need optimization at the expense of accuracy, we can use a "hack" with a normal map, on the assumption that the shape we "see" matters more than the shape that is actually there. Simply put, we transfer the details of the geometry into a normal map texture and use a smaller number of larger polygons instead of a large number of small ones. Of course, a shape conveyed only by a normal map appears flat when viewed from the side in 3D. If we work in 3D and VR, the slightly more demanding alternative is parallax mapping, which builds on the principles of normal maps but adds an additional depth layer.
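
The principle behind that "hack" can be sketched outside any modeling package. A minimal illustration (not a real baker, which compares high-poly and low-poly geometry): given a small height field describing surface relief, the per-pixel normals that fake that relief on a flat polygon can be derived from finite differences.

```python
import math

def height_to_normals(height, scale=1.0):
    """Convert a 2D height field into per-pixel surface normals.

    Central differences approximate the slope in x and y; the normal
    is the normalized vector (-dh/dx, -dh/dy, 1). A real normal-map
    baker works from high-poly geometry, but the principle is the same.
    """
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * scale
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * scale
            n = (-dx, -dy, 1.0)
            length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
            row.append(tuple(c / length for c in n))
        normals.append(row)
    return normals

# A flat height field yields normals pointing straight up; a slope
# rising in x yields normals tilted against that slope.
flat = [[0.0] * 4 for _ in range(4)]
print(height_to_normals(flat)[1][1])
slope = [[float(x) for x in range(4)] for _ in range(4)]
print(height_to_normals(slope)[1][1])
```

In a real pipeline these normals would be packed into an RGB texture (the familiar purple-blue normal map) and sampled by the shader instead of being stored as extra polygons.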

Regarding optimization by creating an illusion of shape through additional texture layers, there is a quote attributed to Timothy Lottes (Epic Games, Nvidia): "[in VR] textures are virtually useless because they look like images painted on toys instead of real geometry."

Texture optimization for faster rendering

If we look at the types of textures, we find three: raster, procedural and vector. The graphics for each of these types are created in a different kind of 2D graphics editor, and graphics already created in one type are not directly transferable to another.

The optimal solution, if possible, is to use vector graphics. However, raster graphics are still used much more often: for example, all images created in Paint, Photoshop or Gimp, in short, those with extensions .jpg, .png and the like.

  1. Pixel texture size (texture resolution)

    A raster image is defined by individual pixels. In the simplified accounting used below, the mere existence of each pixel takes 1 bit. A texture measuring 100x100 px therefore has 10,000 bits, or 10,000 bits / 8 (conversion to bytes) / 1,000 (conversion to kilobytes) = 1.25 kB.

    100x100 px (10,000 pixels): is that a low, high, or ideal size for a texture? No one can answer that for you; it depends on the specific texture, its distance from the camera and the amount of RAM of the target device.

    In general, with texture filtering and antialiasing active, less RAM is required to render a lower-resolution texture, but the rendering time is higher due to the additional calculations needed to smooth that texture.

  2. Color channels

    In order for a pixel to be visible, it must have certain color information assigned to it. The natural color format for computer and television screens is RGB, which covers all shades that can be created by combinations of red (R), green (G) and blue (B). The individual colors of the RGB format (i.e. R, G and B) are called channels. In our simplified accounting, the existence of each channel takes 1 bit, so every pixel from the previous point carries another 3 bits for the color channels. In practice, we often also require adjustable transparency, available through the alpha channel (A). This creates the RGBA format, and with it the requirement for one more bit.

    After assigning the RGB format, the original texture suddenly has a size of 10,000 * 3 = 30,000 bits = 3.75 kB.

  3. Color depth

    With the existence of color channels, we can finally work with colors. The total range of colors available for one pixel is defined by the color depth in bits. A depth of 1 bit is sufficient for a black-and-white image; beyond that, the number of available colors grows exponentially with the depth.

    • With a color depth of 8 bits per channel and the RGB color format, 256 values are available per channel (the range for 8 bits is 0 to 255, i.e. 2^8 = 256). If we raise this number to the power of the 3 channels (256^3), we get 16.7 million possible RGB combinations.

      Images with 8 bits per channel are often referred to as 24-bit or 32-bit images. This is just a different labeling: first we speak of the depth per channel (i.e. 3 channels of 8 bits = 24 bits) and then of the depth of the whole image, i.e. again 24-bit (RGB) or alternatively the 32-bit variant including the alpha transparency channel (RGBA format).

      At a color depth of 8 bits per channel, our initial texture has grown to 30,000 * 8 = 240,000 bits = 30 kB.

    • A color depth of 16 bits per channel with the RGB color format (a 48-bit image) allows finer color gradations than a 24-bit image. At 8 bits of depth per channel, 256 shades of one color are available (= colors per channel), while at 16 bits it is already 65,536 shades per channel, so with three channels a total of 65,536^3 ≈ 281.5 * 10^12 color combinations is available.

    The 16-bit depth is useful during various editing adjustments (applying filters, adjusting contrast, etc.), during which color information could be lost at 8-bit depth. For the final build, however, a maximum depth of 8 bits per channel is always used, which is fully sufficient for the resolution of the human eye.

  4. Size compression

    The size of the texture file can also be reduced by compression, which simply means reducing the amount of data we store with the image file. This can, of course, come at the expense of its quality.

    Compression types
    • Lossless compression - the compression algorithm removes only data that does not affect image quality. This allows you to repeatedly convert and re-save the image without losing quality. A typical lossless format is .PNG.
    • Lossy compression - the compression algorithm uses approximation and discards part of the data at the cost of image degradation. The compression level can be set by the user. A typical lossy output format is .JPG.

    Images typically also include metadata: the author's name, the location of the image, the editing software used, and so on. Metadata is textual information of very small size and has almost no effect on performance. Still, removing it is easy: its presence in the exported image is controlled by a checkbox when exporting from a graphics editor.

  5. The number of textures assigned to the CAD model

    When an object is rendered, communication takes place between the CPU-side renderer and the GPU: in principle, the CPU recalculates specific parts (object, texture, ...) and sends a request to the GPU to display or change them. So if we have only one texture on the object, only one such request goes to the GPU. However, if our object is segmented, with different textures on different parts, more requests go to the GPU at one time, which keeps the GPU busier. The number of connections required between the renderer and the GPU for material setup is reported under the "SetPass calls" key; "Draw Calls" then counts every item sent for rendering.
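
The accounting from points 1-3 can be wrapped into a small helper. This is the same simplified model as in the text (uncompressed size only; real GPU formats add padding, mipmaps and block compression):

```python
def texture_size_kb(width, height, channels=3, bits_per_channel=8):
    """Uncompressed texture size: width * height * channels * depth.

    Mirrors the step-by-step accounting in the text; compression,
    mipmaps (~33 % extra) and GPU padding are deliberately ignored.
    """
    bits = width * height * channels * bits_per_channel
    return bits / 8 / 1000  # bits -> bytes -> kilobytes

# The running example from the text: 100x100 px, RGB, 8 bits per channel
print(texture_size_kb(100, 100))              # 30.0 kB
# Adding the alpha channel (RGBA) grows it by a third:
print(texture_size_kb(100, 100, channels=4))  # 40.0 kB
# A typical 2048x2048 RGBA texture is three orders of magnitude larger:
print(texture_size_kb(2048, 2048, channels=4))
```

Running the numbers for realistic texture resolutions makes it obvious why resolution, channel count and bit depth are the first things to question when optimizing.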

A well-optimized texture is one that has the desired graphic quality at the lowest possible size and the fastest possible rendering. If we are developing for multiple devices with vastly different computing power, we may have to handle textures for each device individually to achieve the maximum possible visual quality.
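
Handling textures per device can start from a simple budget calculation. A minimal sketch using the same uncompressed-size accounting as above (the per-device budget figures are purely illustrative, not real hardware limits):

```python
def max_texture_side(budget_kb, channels=4, bits_per_channel=8):
    """Largest power-of-two square texture that fits a memory budget.

    Uses the simplified uncompressed-size model from the text;
    real engines would also account for mipmaps and compression.
    """
    side = 1
    while True:
        next_side = side * 2
        size_kb = next_side * next_side * channels * bits_per_channel / 8 / 1000
        if size_kb > budget_kb:
            return side
        side = next_side

# Illustrative budgets for a single RGBA texture on three device classes:
for device, budget_kb in [("mobile", 2_000), ("standalone VR", 8_000), ("PC", 70_000)]:
    print(device, max_texture_side(budget_kb))
```

A build pipeline could use a table like this to export each texture at a different maximum resolution per target platform instead of shipping one size for all.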

Manual optimization of CAD models

The basic principles an application developer needs to know for building optimized applications are described above. Although the list may seem long, with the right habits these principles are applied directly in the process of creating, adjusting and choosing content resources, and thus require less time than it might seem at first sight. Construction, informational and other data attached to imported 3D models can be a more challenging problem: these must either be removed or, if their visual appearance must be preserved, converted to normal maps. However, the biggest time sink occurs when the source models change during the development and operation lifetime of the application; in that case, every modification of the source model would require repeating its performance optimization. Such a process is quite tedious and time-inefficient, in short, ideal for automation.

Still, here are a few tips for quick basic optimization:

  • The number of model polygons can be reduced in any 3D modeling software. In Blender, this is possible via the "Decimate" modifier.
  • The number of GPU queries needed to render objects (SetPass calls) can be reduced by baking lightmaps.
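The idea behind a polygon-reducing modifier like Blender's "Decimate" can be sketched with the simplest possible scheme, vertex clustering: snap vertices to a coarse grid, merge those that land in the same cell, and drop triangles that collapse. This is a hypothetical minimal sketch; production decimators use smarter edge-collapse heuristics that better preserve shape.

```python
def decimate_by_clustering(vertices, triangles, cell_size):
    """Merge vertices falling into the same grid cell, drop degenerate tris.

    vertices:  list of (x, y, z) tuples
    triangles: list of (i, j, k) index triples into `vertices`
    Returns (new_vertices, new_triangles).
    """
    cell_of = {}       # grid cell -> new vertex index
    remap = []         # old vertex index -> new vertex index
    new_vertices = []
    for x, y, z in vertices:
        cell = (round(x / cell_size), round(y / cell_size), round(z / cell_size))
        if cell not in cell_of:
            cell_of[cell] = len(new_vertices)
            new_vertices.append((x, y, z))  # keep the first representative
        remap.append(cell_of[cell])
    new_triangles = []
    for i, j, k in triangles:
        a, b, c = remap[i], remap[j], remap[k]
        if a != b and b != c and a != c:    # drop collapsed (degenerate) triangles
            new_triangles.append((a, b, c))
    return new_vertices, new_triangles

# Two nearly coincident vertices merge and one sliver triangle disappears:
verts = [(0, 0, 0), (0.01, 0, 0), (1, 0, 0), (0, 1, 0)]
tris = [(0, 1, 2), (0, 2, 3)]
v2, t2 = decimate_by_clustering(verts, tris, cell_size=0.5)
print(len(v2), len(t2))  # 3 1
```

The `cell_size` parameter plays the same role as a decimation ratio: the coarser the grid, the fewer vertices and triangles survive, at the cost of shape accuracy.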

Automatic optimization of CAD 3D models

If you need to work time-efficiently with CAD models in a game engine, optimize performance on a short schedule, or continuously optimize a larger number of CAD models, automating the process can be very beneficial. Several products are available on the market, and the choice depends on the intended use of the models and the game engine used. If the goal is to continue working with the models in the Unity game engine, the combination Unity Pro + PiXYZ Plugin + PiXYZ Studio, directly from the engine's creators, may come in handy, or the Unity Reflect tool. The package can greatly simplify and speed up the conversion of CAD models to that game engine, but it is not free: the annual single-user license is $2,100 for PiXYZ Studio, $1,150 for the PiXYZ Plugin, plus of course $1,800 for a Unity Pro license. For the Unreal game engine from Epic Games, there are the free tools Datasmith for importing CAD models, and Twinmotion. If you are interested in the performance difference between Unity's PiXYZ and Unreal's Datasmith, you can check out the report from Tata Elxsi published by Unity.

If you insist on Unity, a more general and cheaper solution, with a price tag of $99.90 per month, is Substance 3D by Adobe.

The automation section will be continued once I find free time to write it.

Follow-up optimizations in the game engine

  • Suitable settings for "Levels of Detail" (LODs) and "Hierarchical Levels of Detail" (HLODs) for maximum reduction of draw calls. LOD (+ culling) reduces the number of polygons and the quality and number of displayed materials based on the distance of the rendered object from the camera. HLOD groups models, combines materials and textures, and replaces multiple models with static meshes, all again to reduce draw calls. An advantage of HLODs is also their non-destructive way of working (modifying objects does not affect the HLOD setup).
  • To further reduce the number of Draw Calls, merge suitable meshes together, as well as think about Instanced rendering.
  • Minimize repetitive calculations: perform calculations only when needed, not every frame. Configure triggers through events instead of conditions checked each frame. Rotating graphics can be handled in a shader instead of being recalculated. On lower-performance devices, base shader calculations on vertices instead of pixels.
  • "Bake" and "Cull" everything possible if it is not already baked from 3D modeling software

Finally - check the result

After thorough optimization, it should be possible to display the CAD model in any application, always in the highest quality the device's performance allows. If that doesn't happen, it's time to look at profiling: the complexity of displaying individual parts of the model and of the application in general.

Profiling and optimization in Unreal Engine 4