Storing a triangulation as a list of three vertex indices per triangle is a common task, and it leads to a frequently asked question that reads as follows: *“Creating a triangulation with Fade is fast. But when I store a face list of vertex indices it runs forever. What is wrong?”*

The short answer: it is the asymptotic time complexity of your output loop. The following three demo functions all produce the same result, but with very different runtimes. See what happens:

## The bad function

Let’s start with bad code: A loop iterates over all triangles and their corners. In each step an inner loop visits all input points to find the matching index. The output of this function is correct, but the asymptotic runtime is O(n²), which means that roughly a quadratic number of steps must be carried out for n triangles. Consequently the time consumption grows quadratically, i.e., everything seems fine for a mesh of 50 triangles, but you can expect excessive runtime for 100 000 triangles.

```cpp
void badMethod(vector<Point2>& vPoints)
{
    Fade_2D dt;
    dt.insert(vPoints);
    // BAD CODE FOR DEMONSTRATION - don't do it like that!
    vector<Triangle2*> vAllTriangles;
    dt.getTrianglePointers(vAllTriangles);
    std::ofstream f("out2.txt");
    timer("badMethod");
    for(vector<Triangle2*>::iterator it(vAllTriangles.begin());
        it!=vAllTriangles.end();++it) // Outer loop: all triangles
    {
        Triangle2* pT(*it);
        for(int i=0;i<3;++i)
        {
            Point2* pCorner(pT->getCorner(i));
            for(size_t j=0;j<vPoints.size();++j) // Inner loop: linear search
            {
                if(vPoints[j]==*pCorner)
                {
                    f<<j<<" ";
                    break;
                }
            }
        }
        f<<"\n";
    }
    timer("badMethod");
    f.close();
}
```

## A fast solution using a map

Here is a better solution that can be used for this and similar cases: First a map container is filled with the vertex pointers along with their indices. Then a loop iterates over all triangles and fetches the associated index of each corner from the map. Searching in the map takes O(log n) time, so the overall time consumption is only O(n log n). Strictly speaking it is 3·n·log n, but that is the same; you may google “Big-O Notation” for the details. This is fast enough for any practical input.

```cpp
void mapMethod(vector<Point2>& vPoints)
{
    Fade_2D dt;
    vector<Point2*> vHandles;
    dt.insert(vPoints,vHandles);
    vector<Triangle2*> vAllTriangles;
    dt.getTrianglePointers(vAllTriangles);
    // Fill the map once: vertex pointer -> index
    map<Point2*,int> mVertexIndex;
    for(size_t i=0;i<vHandles.size();++i)
    {
        Point2* pVtx(vHandles[i]);
        mVertexIndex[pVtx]=static_cast<int>(i);
    }
    std::ofstream f("out1.txt");
    timer("mapMethod");
    for(vector<Triangle2*>::iterator it(vAllTriangles.begin());
        it!=vAllTriangles.end();++it)
    {
        Triangle2* pT(*it);
        for(int i=0;i<3;++i)
        {
            Point2* pCorner(pT->getCorner(i));
            f<<mVertexIndex[pCorner]<<" "; // O(log n) lookup
        }
        f<<"\n";
    }
    timer("mapMethod");
    f.close();
}
```

## Even faster: Use custom indices

Fade provides the possibility to store an arbitrary integer in each point, the so-called custom index. The code below demonstrates that: It sets the custom indices, then it iterates over the triangles and their corners and prints the indices. The asymptotic runtime of the output loop reduces to O(n), which is optimal.

```cpp
void customIndexMethod(vector<Point2>& vPoints)
{
    Fade_2D dt;
    // Store each point's index directly in the point: O(n)
    for(size_t i=0;i<vPoints.size();++i)
    {
        vPoints[i].setCustomIndex(i);
    }
    dt.insert(vPoints);
    vector<Triangle2*> vAllTriangles;
    dt.getTrianglePointers(vAllTriangles);
    timer("customIndexMethod");
    std::ofstream f("out0.txt");
    for(vector<Triangle2*>::iterator it(vAllTriangles.begin());
        it!=vAllTriangles.end();++it)
    {
        Triangle2* pT(*it);
        for(int i=0;i<3;++i)
        {
            Point2* pCorner(pT->getCorner(i));
            f<<pCorner->getCustomIndex()<<" "; // O(1) lookup
        }
        f<<"\n";
    }
    timer("customIndexMethod");
    f.close();
}
```

## Runtime comparison for the three functions

We have three functions that produce exactly the same output but with different time complexity. How important is the difference in practice? The table below compares the time consumption of the output loops, in seconds.

| Number of triangles | 1 000 | 10 000 | 100 000 | 1 000 000 |
|---|---|---|---|---|
| Bad O(n²) function | 0.014 | 0.481 | 42.663 | N/A |
| Better O(n log n) solution | 0.003 | 0.025 | 0.121 | 1.338 |
| Custom indices, O(n) | 0.000 | 0.007 | 0.043 | 0.220 |

The numbers in the above table do not exactly reflect the asymptotic time complexities because hard disk access is involved. But the table clearly shows that the two proposed output loops are the better choice.