@janggut
if the zoom factor is fixed (1/4X, 1/2X, 2X, 4X), what about making different sets of avatars to match the zooms, so you won't have to magnify/zoom out the avatar and get grainy effects showing?
@Marian
However, I must admit that sometimes I just don't get what he is actually talking about.
I apologize for making assumptions that led to writing birds flying over heads. <img src="/ubbthreads/images/graemlins/biggrin.gif" alt="" />
Here is an attempt to be as simple as I can.
The T3D model of an avatar is created with the help of some utility, but whatever that utility is, we end up with a mathematical definition.
Let us say that the avatar is defined in a space with its coordinate origin at the centre of its pelvic bones, like a marionette. Also let us assume that each avatar’s surface is described by 1000 ± 100 polygons in all.
Each polygon is defined relative to that origin by three vertices, and each vertex is defined by three numbers, thus each polygon is defined by a total of nine numbers. I hope this is clear enough.
Now each polygon is a two-sided lamina, and to identify its outer surface a tenth number, the surface-normal direction, gives a solid angle relative to the axes originating from the point of origin. Now bear with me.
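To pin down the bookkeeping above, here is a minimal sketch in Python. The constant names are mine, not from any real engine; the counts are the ones assumed in this post.

```python
# Per-polygon storage as described above (illustrative names).
NUM_POLYGONS = 1000          # roughly 1000 +/- 100 polygons per avatar
NUMBERS_PER_POLYGON = 9 + 1  # three vertices x three coordinates,
                             # plus one surface-normal direction

# One animation frame is therefore a flat list of 10000 numbers.
numbers_per_frame = NUM_POLYGONS * NUMBERS_PER_POLYGON
print(numbers_per_frame)  # 10000
```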
Each animation script, for example walk, run, etcetera, must have a corresponding list of frames. Each frame must hold 10000 numbers, because each of the 1000 polygons requires 9 coordinates plus one surface-normal direction. Now, given the eye’s position relative to the world and the heading of the avatar in motion, a simple arithmetic test decides which polygons are visible in each frame. The number of visible polygons in each frame must be less than half the total number of polygons per frame. That is why by testing only 1000 numbers (the normals) we can decide which fewer-than-4500 numbers we need to read per frame. The associated textures are indexed per polygon and are called by polygon index number; then, by some mathematical equation for tilt, we compress the bitmap into the 2D area that represents the visible polygon.
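The visibility decision can be sketched as a simple back-face test on the stored normals. This is a toy version using flat 2-D headings in radians rather than true solid angles, and the function name is my own invention:

```python
import math

def visible_polygons(normal_angles, view_angle):
    """Return indices of polygons whose outer face points toward the eye.
    Angles are 2-D headings in radians -- a flat sketch of the
    solid-angle test described above, not a full 3-D implementation."""
    toward_eye = view_angle + math.pi  # direction from the surface back to the eye
    visible = []
    for i, n in enumerate(normal_angles):
        # The face is visible when its normal is within 90 degrees
        # of the direction toward the viewer (positive cosine).
        if math.cos(n - toward_eye) > 0.0:
            visible.append(i)
    return visible

# 1000 normals spread evenly around a circle: roughly half face the eye.
normals = [2 * math.pi * i / 1000 for i in range(1000)]
count = len(visible_polygons(normals, 0.0))
```

With the normals spread evenly, about half the polygons pass the test, which matches the less-than-half observation above.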
Now read very carefully here.
If the area that represents a polygon is too small, we cannot avoid graininess and bad quality.
So how can we find out whether such an area is too small? By calculating the minimal vertical number of pixels that still carries identifiable visual features.
Now let me demonstrate the super deluxe model first.
I shall assume zooming out to the maximum, displaying the avatar as small as possible.
Now let us examine a vertical section passing through the centre of the eyes and the nose.
A single polygon for the top of the head (I hope there is no fewer than one, because zero would let us see his brains).
A single polygon for the forehead (a patch of skin colour).
A single polygon for the eyebrows (the colour of hair).
A single polygon for the eyes.
A single polygon for the nose.
A single polygon for the moustache/upper lip.
A single polygon for the lower lip.
A single polygon for the chin.
This scheme is definitely out of proportion but let us just assume that this is acceptable as the least resolution.
Therefore the head needs a minimum of 16 vertical pixels so that we may call it a head. <img src="/ubbthreads/images/graemlins/biggrin.gif" alt="" />
Dwarves and imps shall solve a lot of problems here, so do not take them as any benchmark. <img src="/ubbthreads/images/graemlins/smile.gif" alt="" />
Let us assume that our avatar is a gorgeous babe whose head is one part of her height.
Japanese people have shorter legs and longer abdomens, while African people have longer legs and shorter abdomens. Now let us take the international average, which says that a head is 1/6 of the total body height, excluding children of course. 16 x 6 = 96, therefore 96 pixels are absolutely needed for the worst case of a super deluxe model.
To add one pair of pixels to the head is to add 12 pixels to the whole body. Therefore the next better model is 108 pixels high, with a nose two pixels long. <img src="/ubbthreads/images/graemlins/biggrin.gif" alt="" />
For a head top that looks any better than a box lid we need two more pixels.
Therefore the least acceptable model must be 120 pixels high for an adult human avatar.
Add to those 8 pixels for anti-aliasing, to blend the avatar softly with the background, similar to the diffractive effects of light in nature. This brings our standard avatar’s height to a practical binary number of pixels.
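The pixel arithmetic above can be tallied in a few lines, using only the numbers from this post (the variable names are mine):

```python
# Re-running the post's pixel arithmetic.
FEATURES = 8        # head top, forehead, eyebrows, eyes, nose,
                    # upper lip, lower lip, chin: one polygon each
PX_PER_FEATURE = 2  # least acceptable resolution per feature
HEAD_FRACTION = 6   # a head is 1/6 of adult body height

head_px = FEATURES * PX_PER_FEATURE          # 16-pixel bare-minimum head
worst_case = head_px * HEAD_FRACTION         # 96: super deluxe minimum
with_nose  = (head_px + 2) * HEAD_FRACTION   # 108: nose two pixels long
least_ok   = (head_px + 4) * HEAD_FRACTION   # 120: head top better than a box lid
standard   = least_ok + 8                    # 128 with the anti-aliasing margin
print(worst_case, with_nose, least_ok, standard)  # 96 108 120 128
```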
This means that my original polygons must be distributed proportionally to fill a vertical scale, which is a multiple of 128 if we shall use integer arithmetic or just 128 if we shall use floating point arithmetic.
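Distributing the polygons proportionally over such a vertical scale could look like this sketch; `rescale_vertical` is a hypothetical helper of mine, not a function from any engine:

```python
def rescale_vertical(ys, scale=128.0):
    """Proportionally map model-space vertical coordinates onto a
    0..scale range. 'scale' would be 128 for floating point, or a
    multiple of 128 for integer arithmetic, as argued above."""
    lo, hi = min(ys), max(ys)
    return [(y - lo) * scale / (hi - lo) for y in ys]

# Example model-space heights (pelvic origin at 0.0).
heights = rescale_vertical([-0.9, 0.0, 0.35, 0.9])
```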
The final outcome is displaying every feature rather than discarding intermediate zeros.
This has nothing to do with zooming in or out or screen resolution or scale. It is the basic information required for the minimal full detailed adult human avatar. Putting the same information on 256 pixels doubles the polygonal displayed height and the details of texture but no new polygons are being added here.
This means that zooming in enhances the textures only but does not add polygonal details. In fact zooming in may add previously discarded polygons, which might seem as if we added new polygons but we did not do that.
My whole calculation was focussed on avoiding the discarding of polygons on zooming out.
If the adult human avatar is displayed at less than 128 pixels high, there is no safeguard against losing polygons and consequently some details, which we perceive as blurriness and/or graininess.
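The thresholds derived here boil down to a simple classification. `avatar_quality` is my own hypothetical helper; the cutoffs of 96 and 128 pixels are the ones derived in this post:

```python
def avatar_quality(height_px):
    """Classify a displayed adult-avatar height by the post's thresholds."""
    if height_px >= 128:
        return "good"       # all polygons and features survive
    if height_px >= 96:
        return "threshold"  # bare minimum; some features a single pixel
    return "bad"            # polygons get discarded -> graininess
```

This matches the three comparison pictures at the end of the post: 64 pixels is bad, 96 is the threshold, 128 is good.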
If this still flies over heads I have no idea what else I could say, but I shall try again if asked to. <img src="/ubbthreads/images/graemlins/biggrin.gif" alt="" />
<img src="/ubbthreads/images/graemlins/wave.gif" alt="" />
[color:"yellow"]
Addendum: I have edited the post to correct some numerical glitches and to add those pictures for comparison.
[/color]
![[Linked Image]](http://www.eonet.ne.jp/~hemetis/DDFW_64.jpg)
...Height = 64 PIX...Bad
![[Linked Image]](http://www.eonet.ne.jp/~hemetis/DDFW_96.jpg)
...Height = 96 PIX...Threshold
![[Linked Image]](http://www.eonet.ne.jp/~hemetis/DDFW_128.jpg)
...Height = 128 PIX...Good