*On 19/05/2012 at 08:36, xxxxxxxx wrote:*

User Information:

Cinema 4D Version: R13

Platform: Windows ;

Language(s) : C++ ;

---------

Hi All,

I've looked through many threads here, and on the net, but I still can't get my head round this. I want to convert a point on the surface of an object to the equivalent UVW coordinate. So far I have:

* the index of the polygon (in a triangulated object)

* therefore the coordinates in object space of the vertices making up that polygon

* a point on the surface of that polygon, in object space

* the UVW struct of the UV polygon corresponding to that object polygon

So, given all that, how do I find the UVW coordinate of that surface point? I've seen references to interpolating values, but how is this done? Does anyone have any ideas, references, or, even better, a code snippet?

Any help would be much appreciated.

Thanks,

Steve

*On 20/05/2012 at 09:06, xxxxxxxx wrote:*

Hi Scott,

I will post something when it's all working correctly, right now I've got some kinks to work out of it and need to integrate it into my plugin. Then I'll upload the code so you can have a look.

Steve

*On 20/05/2012 at 07:40, xxxxxxxx wrote:*

Oh please, please, please post an example of sampling an object's surface to get the texture color, guys.

Can you tell that I really want this?

I've gotten as far as creating barycentric coords... but never was able to get any farther with it.

I've been wanting to know how to do this for a long time.

-ScottA

*On 20/05/2012 at 03:02, xxxxxxxx wrote:*

Excellent - I'm glad it helped. I'm just curious though... if you're writing a shader, doesn't the interface provide you with the UVW coordinates for each pixel (texel) it needs a shading value for in the callback?

EDIT: Oh, I misread your post... you're not writing a shader, you're polling/sampling one - that makes sense now.

*On 20/05/2012 at 01:53, xxxxxxxx wrote:*

And it works!

When I was working through your code, I realised that in fact you are calculating barycentric coordinates. I blew the dust off one of the 'maths for game programmers' books I've accumulated and there it was - using barycentric coords to derive a surface texture position.

So now I can use this to sample the shader, which is exactly what I wanted to do.
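For anyone finding this thread later, the sampling side ends up looking roughly like the sketch below. This is a hedged outline only - `shader`, `doc` and `uvw` are assumed to already be in scope, and the `InitRenderStruct`/`ChannelData` field names and method signatures are from memory, so verify them against the R13 SDK docs before relying on them:

```
// Hedged sketch - check field names/signatures against the R13 SDK documentation
InitRenderStruct irs;
irs.time = doc->GetTime();
irs.docpath = doc->GetDocumentPath();
if (shader->InitRender(irs) == INITRENDERRESULT_OK)
{
    ChannelData cd;
    cd.p = uvw;                   // the interpolated UVW coordinate for the surface point
    cd.n = Vector(0.0, 1.0, 0.0); // dummy surface normal
    cd.d = Vector(0.0);
    cd.t = 0.0;                   // current time (matters for animated shaders)
    cd.texflag = 0;
    cd.vd = NULL;                 // no VolumeData available outside a render context
    cd.off = 0.0;
    cd.scale = 0.0;
    Vector colour = shader->Sample(&cd);
    shader->FreeRender();
    // ...use 'colour' to drive the particle emission...
}
```

Note that some shaders (noises, layered shaders, anything needing `VolumeData`) may not sample correctly outside a real render context, so treat this as a starting point.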

Thanks again Keith, that's a major headache out of the way.

Steve

*On 20/05/2012 at 01:00, xxxxxxxx wrote:*

Hey Keith,

Thank you *very* much for the time you spent and for the code. It's very helpful and I'll study it (and the other thread you linked to) in detail.

The reason I need this is to sample a shader colour at that point for use in my particle emitter. What I'm trying to do is the same as in the Thinking Particles PMatterWaves node. In that, there is a parameter called 'Birth Type' which lets you insert a shader which then determines the emission of particles from the source object.

The only way I could think of to do this was to sample the shader, but for that the UV coordinates of the point to sample are needed - which is why I asked this question. But it may be I'm going about it completely the wrong way. I'm in the dark on this but I'll give your code a try and see if it works.

Cheers,

Steve

*On 19/05/2012 at 13:15, xxxxxxxx wrote:*

Oh, one more thing... you should also note that, while the above code/methodology works, it assumes that you have the "correct" UVW triangle/values - which may not be the case where the surface/intersection point lies ON a "UV seam-split" edge/point.

I do not (yet) have a good method for identifying/dealing with those.

*On 19/05/2012 at 13:07, xxxxxxxx wrote:*

BTW, the above code was pulled directly from my Morph Mill plugin, and is used in the MeshMap tag to correlate dissimilar mesh topologies (mapping the vertices - and copying the UVW values - from one mesh to another).

*On 19/05/2012 at 12:51, xxxxxxxx wrote:*

This first routine is the one that computes the 3 weights (for vertices a,b,c of the triangle, but stored in my own structure as w0,w1,w2)...

```
//------------------------------------------------------------------------------------------------------
//******************************************************************************************************
// ComputeTriWeights()
//******************************************************************************************************
//------------------------------------------------------------------------------------------------------
void ComputeTriWeights(triWeightNdx *pWeights, Vector ixPoint, Vector polyNorm, Vector v0, Vector v1, Vector v2)
{
//=================================================================================
// Args:
// pWeights - a structure for holding the computed weights for 3 verts of a triangle
// ixPoint - intersection point of ray-cast with triangle (point on surface of triangle)
// polyNorm - polygon (triangle) Normal
// v0, v1, v2 - 3 verts that make up the triangle
//
// For each vertex of the triangle, its weighting value is computed as:
//
// weight = 1.0 - (ISectDist divided by EdgeDist)
//
// [...if interested, see additional notes and diagrams here:
// http://www.renderosity.com/mod/forumpro/showthread.php?thread_id=2677445&page=2 ]
//
// So far, we have an ISectDist for each vertex-->intersection (idv0..idv2).
// What we don't have yet is the distance from each vertex to the opposing edge,
// along the vector described by the line from the vertex to the intersection.
//
// Optimization: by construction these weights sum to 1.0, so we only need to
// calculate the first 2 weights and can then deduce the 3rd from those results.
//=================================================================================
Real idist, edist;
// compute weight for v0, using v1..v2 edge
edist = Ray2EdgeDist(v0,v1,v2,polyNorm,ixPoint);
if( edist >= flt_max - 0.0000001 ) // FLT_MAX is the signal that the intersection was _at_ this vertex
{ // (or at least so close that the error-correcting code let it through)
pWeights->w0 = 1.0; pWeights->w1 = 0.0; pWeights->w2 = 0.0;
return;
}
else if( edist <= 0.0 ) // don't divide by zero (this would be a degenerate triangle)
pWeights->w0 = 0.0;
else
{
idist = point_distance(v0, ixPoint); // distance from vert of the tripoly to the intersection point
pWeights->w0 = 1.0 - (idist / edist); // solve for weight = 1.0 - (ISectDist divided by EdgeDist)
}
if( pWeights->w0 < 0.0 ) pWeights->w0 = 0.0;
// repeat the above for v1, using v2..v0 edge
edist = Ray2EdgeDist(v1,v2,v0,polyNorm,ixPoint);
if( edist >= flt_max - 0.0000001 ) // FLT_MAX is the signal that the intersection was _at_ this vertex
{ // (or at least so close that the error-correcting code let it through)
pWeights->w0 = 0.0; pWeights->w1 = 1.0; pWeights->w2 = 0.0;
return;
}
else if( edist <= 0.0 ) // don't divide by zero (this would be a degenerate triangle)
pWeights->w1 = 0.0;
else
{
idist = point_distance(v1, ixPoint); // distance from vert of the tripoly to the intersection point
pWeights->w1 = 1.0 - (idist / edist); // solve for weight = 1.0 - (ISectDist divided by EdgeDist)
}
if( pWeights->w1 < 0.0 ) pWeights->w1 = 0.0;
// wgt2 gets whatever is left over...
pWeights->w2 = 1.0 - (pWeights->w0 + pWeights->w1);
if( pWeights->w2 < 0.0 ) pWeights->w2 = 0.0;
}
```

...the next sample routine computes the distances...

```
//======================================================================================================
// Ray2EdgeDist()
//
// Note: this routine is fairly specific to the task at hand, so the name of it shouldn't be taken
// as a general problem solver.
//
// vert - one vertex of a triangle
// e1,e2 - the vertices that make up the opposing edge
// pnorm - the normal of the triangle
// isect - the point of intersection (some point within the triangle)
//
// What this routine does is compute the distance from a vertex to the opposing edge of a triangle,
// along the vector described by the line from the vertex to the intersection point that was
// calculated in the linecast_loop() code.
//
// The methodology I'll use to come up with this is as follows:
//
// 1. create a 'plane' from the edge being tested, perpendicular to the plane described
// by the intersection point and the two points of the edge .
//
// 2. calculate the intersection between a ray cast from the vertex in question (along
// the vertex-->intersection vector), with the plane created in step one, above.
//
// ...note that once we have the new plane from step #1, this is basically the same methodology
// we used back in linecast_loop() to compute the original intersection point, just twisted
// around in space to deal with the new ray direction.
//======================================================================================================
Real Ray2EdgeDist(Vector vert, Vector e1, Vector e2, Vector pnorm, Vector isect)
{
// create plane for e1..e2 edge, perpendicular to triangle
Vector evec = e2 - e1; // get edge vector
Vector perp = !(evec % pnorm); // vector perpendicular to the edge, lying in the triangle's plane (normalized cross product)
Real plane = perp * e1; // plane constant: d = perp . e1 (dot product)
Vector rvec = !(isect - vert); // get ray vector from vert-->intersect
Real ndota = perp * rvec;
Real ndotv = perp * vert;
if( ndota <= 0.0 ) // divide-by-zero sanity check - due to some math rounding error that
return flt_max; // was adjusted for the initial point_in_triangle test, it's possible
// that the point is at the vertex being tested, which could cause the
// ray vector to be wacky, so just return a 'maximum' distance.
return (plane - ndotv) / ndota; // compute intersection distance from vert to edge, along vector from vert-->intersect
}
```

...finally, that point_distance() routine is actually just a simple macro:

```
#define point_distance(a,b) Len((b) - (a))
```

*On 19/05/2012 at 12:45, xxxxxxxx wrote:*

The answer (the one I came up with, anyway) is a bit difficult to describe, but I'll give it a shot... [someone smarter than me may have a more straightforward solution - mine was developed for a specific purpose]

The basic idea is that you need to come up with a 'weight' for each vertex (A/B/C) of the polygon and then compute the new UV coordinates by adding up...

```
// newA (and later newB, newC) are the new UVW structs; oldA/B/C are the old/current UVW structs
newA.x = (oldA.x * weightA) + (oldB.x * weightB) + (oldC.x * weightC);
newA.y = (oldA.y * weightA) + (oldB.y * weightB) + (oldC.y * weightC);
newA.z = (oldA.z * weightA) + (oldB.z * weightB) + (oldC.z * weightC); // UVW also carries a z component
```

...of course the question becomes how you compute the 'weight' value for each vertex of the triangle.

The method that I came up with is worked out and described in this thread, about half-way down the page, where you see the sample image with a black background and a triangle with some sample points on it. [NOTE: at that point in the thread, my primary goal was to determine those weight values for use with something else, but it's the same weights that can be used to derive new UVW coordinates.]

For purposes of discussion, let's just consider one of those sample points in that image - point #2, which is near the center of the triangle. This would be your "a point on the surface of that polygon, in object space". Using the a/b/c vertices of the triangle and that #2 point, you need to find a weighting value for the a/b/c vertices...

To do that, for each vertex, you need to determine:

**EdgeDist** = the distance from the vertex - through point #2 - to where that line intersects with the opposite edge of the triangle.

**ISectDist** = the distance from the vertex to point #2.

...the weight (for **that** vertex) is then computed as:

**weight = 1.0 - (ISectDist divided by EdgeDist)**

...also note that the weights for the 3 vertices of the triangle will add up to 1.0, so you only need to calculate weights for two of them - you can do a simple subtraction for the 3rd.

So, with the above in mind, I'll post some sample code, below...
