## Experience with MD2's? Your brains are needed!

If you have experience with MD2s then perhaps you could help me here. My MD2 loader was the first code I adapted to use my new mesh class/system. This new system will be much better than the old one because it uses VBOs and vertex arrays rather than plain old immediate mode (though it can do that too).

Now it works almost perfectly: it can draw the models fine using VBOs, VAs or immediate mode. However, it can't get the texture coordinates right in *any* of these modes; the same (awry) results are produced with all three methods. At first I thought it might be my endian code (again, *sigh*) because I had revamped it for this project. Alas, it was not (it works fine with other model formats), so the problem is a little deeper than that.

I think part of the problem originates from the peculiar way MD2s store their texture coordinates. The method, while simple enough in itself, is not very VBO friendly, for these reasons:

1) You can only have one set of indices for your VBO.

2) They must line up with both the vertices and texture coordinates.

3) This is not true with MD2s, because:

3.a) MD2s have far more texture coordinate sets than vertices.

3.b) I have no idea why, but it's true. The sydney model (which I borrowed from Irrlicht) has 342 vertices and 2037 texture coordinates. Funnily enough, that's 11 less than the file maximum.

The resolution should be simple: you just line up the tex-coords you need and discard the 1695 (in this case) you don't.

How does one do this? Well, luckily the MD2 stores its triangles like this:

Code:

```c
typedef struct {
    unsigned short vertex[3];  // Indices to the vertices.
    unsigned short texture[3]; // Indices to the texture coordinates.
} MESH_MD2_TRIANGLE;
```

The indices point into arrays of structs containing the various coordinates. This means one does *not* have to divide them by the number of dimensions (3 for vertex, 2 for tex) to remove a dimension offset from the indices; each index already refers to one whole coordinate set.

Now, if you would please look at exhibit b...

Code:

```c
// Process texture coordinates.
// We know which slot the vertex is in (vertex in tri),
// so the tex coord must be made to match.
m_texCrds[mchunk.triangles[i].vertex[u] * 2] =
    (float)mchunk.texCrds[mchunk.triangles[i].texture[u]].s
    / mchunk.header.skinWidth;
m_texCrds[(mchunk.triangles[i].vertex[u] * 2) + 1] =
    (float)mchunk.texCrds[mchunk.triangles[i].texture[u]].t
    / mchunk.header.skinHeight;
```

In the above sample, 'i' counts triangles while 'u' counts the corners of each triangle. The '* 2' turns non-dimension-offset indices into dimension-offset indices (** explained below). 'mchunk' is the model data (loaded from file). This code grabs the index of the vertex and goes to that slot in the final m_texCrds array. In that slot it places the s & t (x & y) values of the MD2 tex-coord that matches that vertex. The divisions by width and height translate the pixel-perfect coords into OpenGL-friendly coords.

Savvy?

It took me a little while to think this through before finally deciding it makes sense. Thing is... it doesn't work.

I think the problem originates from the fact that the MD2 has many more tex-coords than vertices. What I'd like to know is...

a) Why aren't the number of vertices vs. tex-coords equal?

b) How would it affect my algorithm?

c) What else might cause this?

d) Salsa.

If you know anything, or have spotted an error in my logic please call this number; 1-800-JONES-NEEDS-YOUR-HELP.

Thanks guys!

Appendix:

** Explanation a:

Non-dimension-offset arrays vs. dimension-offset arrays:

What I mean by this is relatively simple; here's the code:

Code:

```c
// dimension offset (flat array)
float verts1[num_verts * 3];
// no dimension offset (2D array)
float verts2[num_verts][3];
```

In the first, the indices would have to be multiplied by 3 to get the actual position in the array (OpenGL does this itself). In the second, they would not.

There's also an MD2 converter (with GPL'ed source) in Outnumbered.

akb825 Wrote: I believe what it's doing is it has multiple texture coordinates for each vertex (i.e. it textures each triangle leading to that vertex differently). The solution would be to do what I suggested in your previous thread dealing with this issue: expand vertices to fit each index, and re-compress them in an OpenGL-friendly format. I suggest using a balanced BST to do it, since it will be O(n*lg n) as opposed to O(n^2), which can really add up. (When I made my obj loader, loading times for one very large model literally went down from over 10 minutes using a linked list to around 10 seconds using a tree.)

I thought BST was a disease...

I think I get more of what you said in the other thread now, actually. It's the vertices that I must match up with the tex-coords, not the other way around. I'm afraid the stuff about O(n*lg n) and O(n^2) is going in one ear and out the other. I honestly have no clue what those formulas represent, sorry.

Thanks for those links, Cookie. I'll try to make good use of them!

O(f(n)) means that the algorithm being talked about has a worst-case time cost of a constant factor times the function being described, where n is the number of items being worked on. Therefore, if you used a balanced BST, your vertex-finding algorithm's run time would be proportional to n*lg(n) (where lg is log base 2), while a linked list would be proportional to n^2; in both, n is the number of vertices. Just for the sake of example, if you had 1000 vertices, following those formulas the BST-based algorithm would be (very roughly) 100x faster. And it only gets better with more vertices: that increases to 753x faster with 10,000 vertices.

If you don't know how to make a BST, or even a linked list for that matter, learn. Now. If you don't, you almost certainly won't be able to make a 3D game that runs in real time. You can often get away with things like the STL, but often, such as in this case, you need more information than the standard implementations give you: they typically only tell you whether an equivalent item was already there or not, while here you need either the index of an equivalent vertex already in the structure or the index of the vertex you just inserted. In order to do that, you need a custom implementation.

A hash table could be used instead, such as in that ruby code, but you'd still have to find a way to hash the vertices uniquely. (I believe that's where the scripting languages come in handy, since just about anything goes with them; a compiled language would likely be much harder in that regard.) You probably could just turn all the numbers into text and put them in there, but then if you have a lot of vertices, that's probably going to double (or more) the space required over the tree. Of course, you'll also have the problem of resizing the hash table... (speed-wise, at least, with all the copying, etc.)

In the meantime, I wrote a little experiment to see how many duplicate texture coordinates were in the file. It's just two for loops checking every s & t against every other s & t.

Here are some of the first results:

Code:

```
S of 0 matches S of 3
T of 0 matches T of 3
T of 0 matches T of 11
S of 0 matches S of 12
S of 0 matches S of 13
S of 0 matches S of 14
T of 0 matches T of 14
S of 0 matches S of 15
T of 0 matches T of 15
S of 0 matches S of 16
T of 0 matches T of 16
S of 0 matches S of 17
S of 0 matches S of 19
S of 0 matches S of 20
T of 0 matches T of 20
S of 0 matches S of 21
T of 0 matches T of 21
T of 0 matches T of 32
T of 0 matches T of 34
```

That's rather disturbing. Slot 0 alone possesses several matches within the first 40 values. Insane byte wastage! *dunn dunn duh*

akb825 Wrote: BSTs are a huge part of computer science. Something tells me you're going to have to take a data structures class before you can get into anything more advanced.

Quite possibly.

After reading about BSTs on Wikipedia and several pages I googled, I understand the concept; in reality it's not super-complex. But I'm not exactly sure how it could help me.

Using the knowledge I've acquired, I'm testing myself by trying to predict the behavior of this tree. So far I'm failing.

Jones Wrote: But I'm not exactly sure how it could help me.

Then you need to do more research. Essentially, it helps you by making everything faster. A balanced BST will take about lg(n) checks to find an item, where n is the number of items, as compared to a linear search which takes on average around n/2. This can cut your runtime down substantially. As you add more items, a solution with a balanced BST gets better and better than a linked list. (Think hundreds, thousands, tens of thousands of times better, even more; the more items, the bigger the difference.)

That said, keep reading up on AVL trees. Red-black trees seem to be a favorite of many computer scientists since they're slightly faster on insert (not sure about delete: my professor never actually got to that when we talked about red-black trees...), but AVL trees are much, much easier to implement. You also want to make sure that your trees are as balanced as possible, otherwise it can behave more like a linked list in its search/insert/delete times.

Edit: BTW, Wikipedia has an article on AVL trees, as well.

Correct me if I'm wrong, but is it logical and correct to say that when I delete a node, the greatest value underneath the node *lesser* than the node being deleted replaces it? And if that closest smaller node has no greater value beneath it, then it is used directly.

You know, these BSTs are actually very cool.

EDIT:

Adding on to my explanation: if the node being deleted has no lesser value, then the closest greater one is moved to its position.

Internally it'll have a big array of structs. The struct might look something like this:

Code:

```c
typedef struct {
    int smaller;
    int item;
    int bigger;
} node;
```

Smaller and bigger would point to other slots in the array.

OR:

The class has one node, the root. I could declare nodes outside of the class and then add them. When I do, pointers to them would be added to the nodes they'd belong to.

The second would be messier, but the first would be less flexible.

Your opinions?

Thanks.

I made up an array with 6 vertices and 6 indices, forming some shapes. The shapes shared two vertices, so only four were necessary. I admit this is a little small-scale; I plan to generate some massive arrays semi-randomly for the tool to try.

Using simple ratio-math I scaled up the results, here is what *theoretically* should happen:

Quote:

| Vert-Floats | Indices | Time in microseconds (1/1,000,000 s) |
|------------:|--------:|-------------------------------------:|
| 18          | 6       | 5                                     |
| 400         | 133     | 110                                   |
| 10000       | 3333    | 2777                                  |
| 80000       | 26666   | 22221                                 |

For your math, you're going to have to deal with a much larger set of indices to scale it up reliably. With such a small sample, what you have really isn't accurate. You're also assuming that it scales linearly, which is not necessarily the case: since you need to search through the array each and every time you process another index, the runtime is essentially proportional to the square of the number of indices. This means your estimated times aren't even proportional to the real times you'll see.