## Fun With Math in 3D

This post is about the relative efficiencies of two different approaches to calculating triangle area in three dimensions.

Atalasoft has an add-on that can decode and rasterize DWG CAD files.  I’ve been doing some playing with the underlying data structures for a little R&D.  In that process, I was looking at a test for collinearity – are three points on the same line?  There is a slick way to test that by considering the triangle formed by the three points.  If the triangle has a zero area then the triangle is degenerate, and therefore the points are collinear.
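The shape of that test can be sketched directly. This is a minimal sketch, not Atalasoft code: `Point3D` here is a stand-in struct, `AreCollinear` is a hypothetical name, and the epsilon tolerance is my addition – with floating-point coordinates, testing for exactly zero area is too strict.

```csharp
using System;

public struct Point3D
{
    public double X, Y, Z;
    public Point3D(double x, double y, double z) { X = x; Y = y; Z = z; }
}

public static class Collinearity
{
    // Area of triangle p1-p2-p3 as half the magnitude of the cross
    // product (p2 - p1) x (p3 - p1).
    public static double TriangleArea(Point3D p1, Point3D p2, Point3D p3)
    {
        double ux = p2.X - p1.X, uy = p2.Y - p1.Y, uz = p2.Z - p1.Z;
        double vx = p3.X - p1.X, vy = p3.Y - p1.Y, vz = p3.Z - p1.Z;
        double cx = uy * vz - uz * vy;
        double cy = uz * vx - ux * vz;
        double cz = ux * vy - uy * vx;
        return Math.Sqrt(cx * cx + cy * cy + cz * cz) * 0.5;
    }

    // A (near-)degenerate triangle means the three points are collinear.
    public static bool AreCollinear(Point3D p1, Point3D p2, Point3D p3,
                                    double epsilon = 1e-9)
    {
        return TriangleArea(p1, p2, p3) < epsilon;
    }
}
```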

To do this, I looked at a basic triangle area formula: A = 1/2 base * height.  In this case, I found a version in C from John Burkardt which takes one side as the base and calculates the height by projecting the apex onto the base vector and taking that length.  This is very straightforward – if you compare the code, you’ll note that I made some changes to make it feel more C#-ish, as well as a trivial optimization for a zero-length base (this could be somewhat better still by comparing p1 and p2 for equality before calculating the distance).

```csharp
private static double EuclideanDistance(Point3D p0, Point3D p1)
{
    double dx = p1.X - p0.X;
    double dy = p1.Y - p0.Y;
    double dz = p1.Z - p0.Z;
    return Math.Sqrt(dx * dx + dy * dy + dz * dz);
}

public static double TriangleArea(Point3D p1, Point3D p2, Point3D p3)
{
    // ported from: http://orion.math.iastate.edu/burkardt/c_src/geometryc/geometryc.c
    double dot = (p2.X - p1.X) * (p3.X - p1.X) +
                 (p2.Y - p1.Y) * (p3.Y - p1.Y) +
                 (p2.Z - p1.Z) * (p3.Z - p1.Z);
    double lbase = EuclideanDistance(p1, p2);
    if (lbase == 0.0)
        return 0.0;
    double alpha = dot / (lbase * lbase);
    double a = p3.X - p1.X - alpha * (p2.X - p1.X);
    double b = p3.Y - p1.Y - alpha * (p2.Y - p1.Y);
    double c = p3.Z - p1.Z - alpha * (p2.Z - p1.Z);
    double height = Math.Sqrt(a * a + b * b + c * c);
    return lbase * height * 0.5;
}
```

A rough estimate of the work on this is 16 multiplications/divisions, 2 square roots, and 24 additions/subtractions.

Then I implemented a version from Wolfram MathWorld which uses determinants to get the area.  In this case, I had to go a little further and write the determinant calculation myself, but that’s not so bad.

```csharp
private static double Determinant(double x1, double y1, double x2, double y2, double x3, double y3)
{
    return
        x1 * (y2 - y3) +
        x2 * (y3 - y1) +
        x3 * (y1 - y2);
}

public static double TriangleArea1(Point3D p1, Point3D p2, Point3D p3)
{
    // from http://mathworld.wolfram.com/TriangleArea.html
    double det1 = Determinant(p1.Y, p1.Z, p2.Y, p2.Z, p3.Y, p3.Z);
    double det2 = Determinant(p1.Z, p1.X, p2.Z, p2.X, p3.Z, p3.X);
    double det3 = Determinant(p1.X, p1.Y, p2.X, p2.Y, p3.X, p3.Y);
    if (det1 == 0.0 && det2 == 0.0 && det3 == 0.0)
        return 0.0;
    return Math.Sqrt(det1 * det1 + det2 * det2 + det3 * det3) * 0.5;
}
```

And a rough estimate of the work is 13 multiplications, 1 square root, and 17 additions/subtractions.
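As an aside, the three determinants are exactly the components of the cross product (p2 − p1) × (p3 − p1), so TriangleArea1 is really computing half the cross-product magnitude.  Written in that form (a sketch – the name and the minimal `Point3D` stand-in are mine), the six differences get reused and the count drops further, to roughly 10 multiplications, 11 additions/subtractions, and 1 square root:

```csharp
using System;

public struct Point3D
{
    public double X, Y, Z;
    public Point3D(double x, double y, double z) { X = x; Y = y; Z = z; }
}

public static class CrossArea
{
    public static double TriangleAreaCross(Point3D p1, Point3D p2, Point3D p3)
    {
        // u = p2 - p1, v = p3 - p1 (computed once, reused below)
        double ux = p2.X - p1.X, uy = p2.Y - p1.Y, uz = p2.Z - p1.Z;
        double vx = p3.X - p1.X, vy = p3.Y - p1.Y, vz = p3.Z - p1.Z;
        // u x v; these are the same values as det1, det2, det3 above
        double cx = uy * vz - uz * vy;
        double cy = uz * vx - ux * vz;
        double cz = ux * vy - uy * vx;
        return Math.Sqrt(cx * cx + cy * cy + cz * cz) * 0.5;
    }
}
```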

After some quick unit testing confirmed that the two area routines return identical results, I timed each routine over 10,000,000 iterations from within AQTime and found that TriangleArea1 takes 4.63 seconds and TriangleArea takes 6.8 seconds.  So the geometric method takes roughly 1.47 times as long as the determinant version.

The next question I had was: is there a way to compare these algorithms strictly by approximate computational cost, rather than by measurement?  To answer that, I again hit AQTime and measured the time to do 10,000,000 square roots, 10,000,000 multiplications, and 10,000,000 additions.  From that I calculated normalized costs for each operation: 1 square root costs roughly 3.5 additions (call it 4), and 1 multiply costs about the same as 1 addition.  Since c(1 sqrt) = 4 * c(1 add) and c(1 mult) = 1 * c(1 add), the normalized cost function is c = 4 * (# sqrt) + (# mult) + (# add).
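If you don’t have a profiler handy, the primitive measurements can be roughed out with a Stopwatch.  This is only a sketch (names are mine): the delegate call adds overhead to every iteration, which inflates all three times and compresses the ratios – the same kind of test-rig overhead discussed with the results.

```csharp
using System;
using System.Diagnostics;

public static class PrimitiveCosts
{
    // Times n applications of op and returns elapsed milliseconds.
    public static double TimeIt(Func<double, double> op, int n)
    {
        double x = 1.234;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++)
            x = op(x);
        sw.Stop();
        GC.KeepAlive(x); // keep the result live so the loop isn't elided
        return sw.Elapsed.TotalMilliseconds;
    }

    public static void Report(int n)
    {
        double tAdd  = TimeIt(x => x + 1.000001, n);
        double tMul  = TimeIt(x => x * 1.000001, n);
        double tSqrt = TimeIt(x => Math.Sqrt(x) + 1.0, n);
        Console.WriteLine($"add: {tAdd:F1} ms  mul: {tMul:F1} ms  sqrt: {tSqrt:F1} ms");
        Console.WriteLine($"sqrt ~ {tSqrt / tAdd:F1} adds, mul ~ {tMul / tAdd:F1} adds");
    }
}
```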

Using this, c(TriangleArea) is 51.8 and c(TriangleArea1) is 37.4, and the ratio of the two is roughly 1.41.  So I have an estimate that is about 94% accurate in this particular case.  The error is no doubt due to overhead in the test rig for the primitive operations (which, quite frankly, should be high) as well as the computational glue in the area routines themselves (argument pushing, calls, tests).  I tried putting 10 of each primitive in my test rig and got coefficients of 16, 1, and 1, which gives a ratio of 1.56 – again about 94% accurate, but this time an overshoot.
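The cost model itself is small enough to write down (a sketch, with the weights as parameters).  Plugging the raw operation counts from above into it reproduces the two ratios: weights of (4, 1, 1) give 48 vs. 34, about 1.41, and weights of (16, 1, 1) give 72 vs. 46, about 1.56 – bracketing the measured 1.47.

```csharp
public static class CostModel
{
    // Normalized cost: weight each primitive by its cost in "additions".
    public static double Cost(double wSqrt, double wMul, double wAdd,
                              int nSqrt, int nMul, int nAdd)
    {
        return wSqrt * nSqrt + wMul * nMul + wAdd * nAdd;
    }
}

// TriangleArea:  2 sqrt, 16 mul/div, 24 add/sub
// TriangleArea1: 1 sqrt, 13 mul,     17 add/sub
// CostModel.Cost(4, 1, 1, 2, 16, 24) / CostModel.Cost(4, 1, 1, 1, 13, 17)
```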

While this certainly requires a more in-depth analysis, it appears to be a fairly decent way to get a quick cost estimate of number crunching without needing to pull out a profiler.

Published Friday, March 13, 2009 12:05 PM by Steve Hawley