Interfacing with NetGen

blobfish

New member
Is there something you used from OSG? @blobfish
Yes, undercut.

One of the things I wonder about is the two vertical walls that are orthogonal to the test direction. These two are accessible but get no 'hits', so how do you determine whether an orthogonal face is accessible? Maybe for those faces your approach is used, where the ray starts on the face and projects in the opposite direction?
 

Quaoar

Administrator
Staff member
One of the things I wonder about is the two vertical walls that are orthogonal to the test direction. These two are accessible but get no 'hits', so how do you determine whether an orthogonal face is accessible? Maybe for those faces your approach is used, where the ray starts on the face and projects in the opposite direction?

Here is an excerpt from the spec:

The [face accessibility] algorithm is outlined as follows:
  • A part is tessellated with a CAD-oriented meshing algorithm (BRepMesh of OpenCascade or NetGen) to obtain a piecewise approximation of the initial curved geometry. The linear and angular deflection values required by the tessellation algorithm are selected automatically.

  • [Optional] The mesh is refined to meet skewness criteria (aspect ratio or scaled Jacobian) and a size threshold. The refinement is necessary to improve the accuracy of the accessibility analysis. Some simple techniques can be used, e.g., edge collapse, edge split, and edge swap. Midpoint refinement and triangle subdivision can also be employed at this stage.

  • A BVH structure is constructed for fast ray-triangle intersections. One important modification to the BVH traversal procedure available in Analysis Situs is the avoidance of heap memory allocation.

  • For each BVH facet, the index of the corresponding CAD face is preserved. Facet indices could be preserved as well, but they must not refer to the corresponding BVH elements, since those are reordered dynamically (that is how a BVH works). To keep track of the occluded triangles, we can store them directly with their coordinates, although that is not a memory-efficient solution.

  • The algorithm accepts a tool direction (axis) and associates with each triangle the reversed direction as a ray (bundle) to emit. The ray source is slightly shifted in the direction of the local normal vector at the corresponding facet. Therefore, if a facet is located next to a negative feature (e.g., a hole) in the direction of inspection, no false intersection point is detected.

  • For each facet’s ray, an intersection test is conducted. If a facet happens to be occluded, it is recursively subdivided, with a limited subdivision depth. The recursive subdivision slightly refines the large facets contributing to the occluded regions without modifying the initial mesh:
(figure: recursive subdivision of an occluded facet)

  • For all faces, the total “score” of tests, hits, and void intersections is counted. If the ratio of hits to the total number of tests is high enough (e.g., 95%), the face is considered non-accessible. At this stage, each face can be classified as Fully Accessible, Partially Accessible, or Inaccessible.

  • The results of analysis are exported to WebGL-compatible file format (glTF).
The algorithm is performed for all the input directions, which are supposed to be the principal machining directions. If a face F is detected as accessible for at least one direction, it is deemed accessible in general, so it goes to the result.
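To make the ray-test steps above concrete, here is a minimal pure-Python sketch. It substitutes a brute-force loop over occluder triangles for the BVH traversal, and all names, the shift value, and the 95% threshold are illustrative assumptions, not the Analysis Situs API:

```python
def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b):   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                         a[2] * b[0] - a[0] * b[2],
                         a[0] * b[1] - a[1] * b[0])
def norm(a):
    l = dot(a, a) ** 0.5
    return (a[0] / l, a[1] / l, a[2] / l)

def ray_hits_triangle(origin, direction, tri, eps=1e-9):
    """Moller-Trumbore ray/triangle test; True for a forward hit."""
    v0, v1, v2 = tri
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < eps:
        return False                      # ray is parallel to the triangle
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return False
    q = cross(s, e1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return False
    return f * dot(e2, q) > eps           # the hit must be in front of the origin

def classify_face(facets, tool_axis, occluders, shift=1e-4, threshold=0.95):
    """Classify one CAD face (given as its list of triangles) for one tool axis."""
    d = norm((-tool_axis[0], -tool_axis[1], -tool_axis[2]))  # ray = reversed tool axis
    hits = 0
    for v0, v1, v2 in facets:
        n = norm(cross(sub(v1, v0), sub(v2, v0)))
        c = tuple((v0[i] + v1[i] + v2[i]) / 3.0 for i in range(3))
        origin = tuple(c[i] + shift * n[i] for i in range(3))  # shift off the surface
        if any(ray_hits_triangle(origin, d, occ) for occ in occluders):
            hits += 1
    ratio = hits / len(facets)
    if ratio >= threshold:
        return "Inaccessible"
    return "Fully Accessible" if hits == 0 else "Partially Accessible"
```

For example, an upward-facing facet with nothing above it classifies as "Fully Accessible" for tool axis (0, 0, -1), while the same facet under a large ceiling triangle classifies as "Inaccessible".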
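The limited-depth midpoint subdivision mentioned above (used both for optional refinement and for re-testing large occluded facets) can be sketched as follows; the function name and depth handling are assumptions for illustration:

```python
def subdivide(tri, depth):
    """Split a triangle into 4 at its edge midpoints, recursively, `depth` levels deep."""
    if depth == 0:
        return [tri]
    v0, v1, v2 = tri
    def mid(a, b):
        return tuple((a[k] + b[k]) / 2.0 for k in range(3))
    m01, m12, m20 = mid(v0, v1), mid(v1, v2), mid(v2, v0)
    out = []
    for child in ((v0, m01, m20), (m01, v1, m12), (m20, m12, v2), (m01, m12, m20)):
        out.extend(subdivide(child, depth - 1))
    return out
```

A depth of d yields 4**d sub-triangles, each of which would get its own ray, so a large occluded facet contributes finer-grained tests without any modification of the base mesh.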

So, yes, the rays are emitted from the facets in the direction opposite to the tool axis. Note also that there is a slight shift of the origin point, aimed at avoiding false-positive intersections.
 

blobfish

New member
So, yes, the rays are emitted from the facets in the direction opposite to the tool axis. Note also that there's a slight shift of the origin point aimed at avoiding false-positive intersections.

I was contrasting our approaches. I create a plane on the bounding sphere, sized to the bounding-sphere radius. On this plane I create a uniform grid of points from which to fire rays into the mesh. With that method, the 'vertical' walls of the mesh are indeterminate; with your method, they are determinate. With my method a non-uniform BRepMesh works fine; with your method you need a uniform mesh (thus NetGen) or multiple sample points on the bigger triangles. I still feel like this is a solved problem in ray tracing. Have you looked into OCCT's newer visual ray tracing?
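A minimal sketch of that grid-of-rays setup, as I understand the description; the function and parameter names are hypothetical:

```python
def grid_ray_origins(center, radius, direction, n=8):
    """Uniform n x n grid of ray origins on a plane tangent to the bounding sphere."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    def norm(a):
        l = dot(a, a) ** 0.5
        return tuple(x / l for x in a)

    d = norm(direction)
    # The plane center sits on the bounding sphere, opposite the fire direction.
    pc = tuple(c - radius * di for c, di in zip(center, d))
    # Build an orthonormal basis (u, v) spanning the plane.
    seed = (1.0, 0.0, 0.0) if abs(d[0]) < 0.9 else (0.0, 1.0, 0.0)
    u = norm(cross(d, seed))
    v = cross(d, u)
    step = 2.0 * radius / (n - 1)
    origins = []
    for i in range(n):
        for j in range(n):
            s, t = -radius + i * step, -radius + j * step
            origins.append(tuple(pc[k] + s * u[k] + t * v[k] for k in range(3)))
    return origins
```

Rays from these origins all travel along `direction`, which is exactly why faces parallel to `direction` (the 'vertical' walls) are never hit and stay indeterminate.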
 

Quaoar

Administrator
Staff member
With your method you need a uniform mesh (thus NetGen) or multiple sample points on the bigger triangles.
The customer wanted to have colored meshes in the output glTF. Or it was me who proposed that and then the customer started to want this :D

I still feel like this is a solved problem in ray tracing. Have you looked into OCCT's newer visual ray tracing?
It kind of is. I'm not aware of the details of OpenCascade's ray-tracing algorithm, but I'm using their BVH data structures, which are super fast to construct. That's no surprise, as this stuff was developed by a couple of very advanced folks, one of them holding a Ph.D. in ray tracing. They now run the Light Tracer company: https://lighttracer.org/
 

blobfish

New member
The customer wanted to have colored meshes in the output glTF. Or it was me who proposed that and then the customer started to want this :D


It kind of is. I'm not aware of the details of OpenCascade's ray-tracing algorithm, but I'm using their BVH data structures, which are super fast to construct. That's no surprise, as this stuff was developed by a couple of very advanced folks, one of them holding a Ph.D. in ray tracing. They now run the Light Tracer company: https://lighttracer.org/
I got the CadRays program working on my Debian box, but the graphics of the entire application were FUBAR. That was probably just my system; I don't buy graphics cards or run the proprietary drivers for them. I also tried to create a test program that used the OCCT viewer with ray tracing, but, like most things OCCT, it turned into a huge time sink and I moved on.

Light tracer has some really impressive renders!

I had never heard of glTF. Blender supports it, so I made a glTF of my example and opened it in an online glTF viewer. I have spent some time in Blender, which has given me some intuition for how 3D graphics work.
 

Attachments

  • mytest.gltf.zip
    142.1 KB

Quaoar

Administrator
Staff member
I had never heard of glTF.
I use it because it's quite suitable for the web, and literally all my clients want to visualize shapes using the three.js library, which supports glTF natively. Other than that, glTF is quite horrible to me. Maybe it's good for low-level rendering with OpenGL or WebGL, though. Btw, CAD Assistant opens your model nicely:

(screenshot: the model opened in CAD Assistant)

If I remember correctly, glTF support first appeared in OpenCascade 7.5. Then we (me and Julia @JSlyadne) copied and pasted it into Analysis Situs to add support for colored feature faces, edges, and a customized scene hierarchy. OpenCascade tends to evolve into a weird hybrid of a visualization engine plus a modeling kernel, so sometimes copying and pasting stuff is the only way to make use of interesting functionality :D
 