Transparent and transparency revisited

I am currently reviewing the support for transparent materials that I put into the OpenSceneGraph Collada plugin about a year ago. I realise that I need to do some more work on it to make alpha-blended texture maps work. I have some questions I need answering, if possible. So here goes.

  1. Am I correct in assuming that the default for the opaque attribute of the transparent element at version 1.4.1 is A_ONE (an alpha value of 1 is opaque) and that at 1.5.0 the default has changed to A_ZERO (an alpha value of 0 is opaque)?

  2. YOU CAN IGNORE THIS QUESTION FOR THE TIME BEING, I NEED TO REPHRASE IT!!! I was assuming that the colour referred to in the specification of the transparent element was the colour supplied by that element. But some implementations (e.g. SketchUp) do not seem to produce Collada documents that agree with that. Can someone confirm the source of the colour in the following circumstances?
    i. The transparent element present and specifying a colour, and no other colour or texture elements present.
    ii. The transparent element present and specifying a colour, but with, for instance, a diffuse colour or texture element also present.
    iii. The transparent element present and specifying a texture, and no other colour or texture elements present.
    iv. The transparent element present and specifying a texture, but with, for instance, a diffuse colour or texture element also present.
    v. The transparent element not present, but the transparency element present and, say, a diffuse colour element present.

  3. Other posts in this forum say that in a situation where both a diffuse texture and a transparent texture are present, some sort of multitexturing should be performed. But taking the phong technique as an example, no specification of how this should be performed is given. The phong equation given in the specification does not have a term for the refracted colour. Can anyone give me any hints on how this should be done in OpenGL terms?

Help!

Roger

The default is A_ONE for both versions according to the specifications. See <fx_common_transparent_type> in the schema.

Please read specification chapter 7 section “Rendering: Determining Transparency” for details. That should help you frame your question.

It’s possible that the OpenGL fixed-function pipeline can only approximate some content that utilizes all the elements. You’re right that the phong (and blinn) equations in the spec don’t include the additional reflective term that may be present. I’ll submit a bug for that into Bugzilla. Stay tuned.

Marcus,

Thank you for your reply. I apologise for not having worked out how to quote the original message in replies yet!

My question about the default value for opaque was prompted by the following text in the 1.5.0 release notes.

Resolves report K-622 (Spec: K-3118).
The <transparent> element’s opaque attribute now allows, in addition to A_ONE and RGB_ZERO, the
following values:
• A_ZERO (the default): Takes the transparency information from the color’s alpha channel, where the
value 0.0 is opaque.

I went on to check the 1.5.0 specification and both the A_ values are shown as default in there!

However, as you say, it is A_ONE in the 1.5.0 schema document, thank goodness. I thought I was going to have to make all my code for setting up the transparent/transparency defaults version dependent. I wonder if Google have fixed their transparency code in the latest version of SketchUp? In version 6 they still write dae documents with the default for opaque (A_ONE) but a transparency value of 0 for opaque.

I will watch out for your Bugzilla entry regarding the phong equations, thanks.

If I still have your attention, maybe I can prevail on you to answer this question. If I draw a simple surface in SketchUp and attach a texture map to it that has an alpha channel (e.g. a png image), then SketchUp displays this as expected, with the zero-alpha portions of the image shown as transparent. If I ask SketchUp to export this to dae it writes a technique looking like this:


            <technique sid="COMMON">
               <phong>
                  <emission>
                     <color>0.000000 0.000000 0.000000 1</color>
                  </emission>
                  <ambient>
                     <color>0.000000 0.000000 0.000000 1</color>
                  </ambient>
                  <diffuse>
                     <texture texture="material1-image-sampler" texcoord="UVSET0"/>
                  </diffuse>
                  <specular>
                     <color>0.330000 0.330000 0.330000 1</color>
                  </specular>
                  <shininess>
                     <float>20.000000</float>
                  </shininess>
                  <reflectivity>
                     <float>0.100000</float>
                  </reflectivity>
                  <transparent>
                     <color>1 1 1 1</color>
                  </transparent>
                  <transparency>
                     <float>0.000000</float>
                  </transparency>
               </phong>
            </technique>

I think this will result in a totally transparent material which ignores any alpha information from the texture. Firstly, am I correct, and secondly, how should the technique be written so that it renders using the alpha map from the material? My guess is something like this:


            <technique sid="COMMON">
               <phong>
                  <emission>
                     <color>0.000000 0.000000 0.000000 1</color>
                  </emission>
                  <ambient>
                     <color>0.000000 0.000000 0.000000 1</color>
                  </ambient>
                  <diffuse>
                     <color>0.000000 0.000000 0.000000 1</color>
                  </diffuse>
                  <specular>
                     <color>0.330000 0.330000 0.330000 1</color>
                  </specular>
                  <shininess>
                     <float>20.000000</float>
                  </shininess>
                  <reflectivity>
                     <float>0.100000</float>
                  </reflectivity>
                  <transparent>
                      <texture texture="material1-image-sampler" texcoord="UVSET0"/>
                  </transparent>
                  <transparency>
                     <float>1.000000</float>
                  </transparency>
               </phong>
            </technique>

Roger

Just press the QUOTE button and start writing.

Definitely a bug in the spec and release notes then. I’m guessing a copy&paste error. Thanks.

The schema is authoritative too.

I will let Google know of the issue you’ve brought up.

Right that would be A_ONE mode in COLLADA.

Yes, I agree; since the transparency factor is 0, the material’s contribution is fully transparent.

Using the <phong> shader, calculate the material color (mat) using the terms (elements) given (emission, ambient, diffuse, specular, reflective). That produces the “mat” color (e.g. RGBA) used in the transparency calculation in Chapter 7 of the spec.
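In code, the surface-layer evaluation is roughly the following (a minimal sketch only; Vec4, the helper names, and the assumption that mat.a is carried by the diffuse alpha are illustrative, not spec text):

#include <algorithm>
#include <cmath>

struct Vec4 { float r, g, b, a; };

// Sketch: evaluate the <phong> surface layer for one fragment.
// emission/ambient/diffuse/specular are the technique's colors (or sampled
// texels); ambientLight, nDotL and nDotH come from the lighting setup.
Vec4 phongSurface(const Vec4& emission, const Vec4& ambient,
                  const Vec4& diffuse,  const Vec4& specular,
                  float shininess, float ambientLight,
                  float nDotL, float nDotH)
{
    const float d = std::max(nDotL, 0.0f);
    const float s = std::pow(std::max(nDotH, 0.0f), shininess);
    Vec4 mat;
    mat.r = emission.r + ambient.r * ambientLight + diffuse.r * d + specular.r * s;
    mat.g = emission.g + ambient.g * ambientLight + diffuse.g * d + specular.g * s;
    mat.b = emission.b + ambient.b * ambientLight + diffuse.b * d + specular.b * s;
    mat.a = diffuse.a; // assumption: the diffuse alpha carries through to mat.a
    return mat;
}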

In this case the <diffuse> is a (presumably) RGBA texture and so the only change needed to fix the SketchUp export is to change <transparency> to 1.0.

Note that <transparent> and <transparency> (and <reflective> and <reflectivity>) are post operations (i.e. layers on top) in the rendering calculations (again Chapter 7 for transparent).

I think I have the hang of this quoting thing now!

Once again thanks for the reply, I think it takes us to the heart of the problem I am facing in implementing this in the OpenSceneGraph importer.

The OSG format is essentially a ‘thin’ abstraction of OpenGL. For performance reasons I try wherever possible not to introduce programmable shaders into the imported scenegraph. The spec says that I should use the values from <transparency> and <transparent>.a for the rgb blending equations. It does not mention the use of a mat.a term anywhere in the rgb blending process. In OpenGL terms I interpret this as blending with a constant colour: I copy the <transparency> * <transparent>.a value into the alpha value of glBlendColor, with GL_CONSTANT_ALPHA as the source factor and GL_ONE_MINUS_CONSTANT_ALPHA as the destination factor. This will of course ignore any alpha value from the material and, with default values, produce an opaque result (which is what people have been complaining about!). So as far as I can see I either have to have some way of determining from the <technique> which of the following two versions of the Collada transparency equations to use (see the sketch after this list):

i. The one from the spec.
ii. One modified as follows:
result.r = fb.r * (1.0f - mat.a) + mat.r * mat.a

Both of these can be achieved without using shaders. The alternative is a much more complex version of the equations which incorporates the mat.a factor into the rgb blending along with the transparent.a and transparency factors.
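For reference, here is roughly how the two versions map onto OpenGL state (a sketch; glBlendColor and the constant-alpha factors need OpenGL 1.4 or ARB_imaging):

#include <GL/gl.h>

// Version i: the spec equations. The blend weight is the constant
// transparent.a * transparency, supplied through glBlendColor, so
// result = mat * w + fb * (1 - w).
void setupSpecBlend(float transparentA, float transparency)
{
    glEnable(GL_BLEND);
    glBlendColor(0.0f, 0.0f, 0.0f, transparentA * transparency);
    glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
}

// Version ii: the modified equation, i.e. plain source-alpha blending
// driven by mat.a.
void setupMatAlphaBlend()
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}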

This all seems so complex that I think I must have made a stupid mistake somewhere.

I cannot help feeling that the ability to specify a texture in the transparent field instead of a color plays into this somewhere. If the same RGBA texture were specified for both <diffuse> and <transparent>, then in A_ONE opaque mode, and provided that <transparency> was 1, the equations from the spec would produce the desired result. However this would mean that Google (and others) would be seriously broken.

Any further guidance will be gratefully received. I feel like I am going daft (Northern English slang for insane) over this.

Roger

Yep I’m well familiar with Robert and Don’s work and was part of the R&D team at Sony Computer Entertainment that contributed the COLLADA plug-in to OSG. :slight_smile:

Sure it does, and you talk as if you have seen it (page 249) in your reply lol. The OpenGL fixed-function pipeline can only approximate a transparent layer on top of a (transparent) phong surface. Use the equations in Chapter 7 Rendering as a guide to your approximation.

Simplify things to what I described previously (again page 249). You have a framebuffer of color, possibly a vertex color (w/ alpha), a COLLADA material (e.g. <phong>) that contributes a ‘mat’ (i.e. OpenGL material) color (w/ RGBA channels for emission, ambient, diffuse, specular), a transparent “layer” color (and scalar), and a reflective “layer” color (and scalar).

I have not seen SketchUp export content as you just described. They export a texture in the <diffuse> channel only. Google has confirmed to me in email that they are aware of this bug in their exporter and they characterize it as follows:

Hope that helps.

I am well aware of that. I was just putting the background in for anyone who was not. I am just someone who has spent some time supplying patches :slight_smile: .

I realise that you are being very patient with your replies to my questions and that that patience may be wearing thin lol. However I do not understand your reply. I have read page 249 and its antecedents, going back to revision B of the 1.4.1 release notes, many times. I probably need to make it clear that I use the phrase “rgb blending process” to refer to the three equations which provide the red, green, and blue values to the result variable, not to the fourth equation that provides the alpha value. The mat.a term does not appear anywhere in those first three equations. If I use them as guidance as you suggest, then whenever transparency processing has been activated by the presence of either transparent or transparency in the technique, I should ignore the material’s alpha value when calculating the red, green, and blue values of the resulting colour, whether or not I am doing this for OpenGL or any other renderer. Or have I once again totally misread what you are saying?

Thanks,

Roger

No worries Roger, I have lots of patience! Although you have managed to confuse me wrt what’s on page 249, so I conclude we must be looking at different versions of the document or something.

From the 1.5.0 spec, on page 249 there is the equation for A_ONE mode with the “mat.a” term (highlighted in red):

In A_ONE opaque mode:

result.r = fb.r * (1.0f - transparent.a * transparency) + mat.r * (transparent.a * transparency)
result.g = fb.g * (1.0f - transparent.a * transparency) + mat.g * (transparent.a * transparency)
result.b = fb.b * (1.0f - transparent.a * transparency) + mat.b * (transparent.a * transparency)
result.a = fb.a * (1.0f - transparent.a * transparency) + mat.a * (transparent.a * transparency)

That is the material alpha I’m talking about: it is the result of the surface material color, e.g. the <phong> surface color layer, excluding the reflective and transparent layers.

For “mat” plug in the values from e.g. <phong><diffuse> etc. and for “transparent” plug in the values from <phong><transparent> etc…
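As a sketch, the substitution is mechanical; Vec4 is just an illustrative RGBA holder:

struct Vec4 { float r, g, b, a; };

// The page-249 A_ONE equations, transcribed directly: "mat" is the surface
// layer (e.g. <phong>) result and "transparent"/"transparency" come from
// the technique.
Vec4 blendAOne(const Vec4& fb, const Vec4& mat,
               const Vec4& transparent, float transparency)
{
    const float w = transparent.a * transparency; // common blend weight
    Vec4 result;
    result.r = fb.r * (1.0f - w) + mat.r * w;
    result.g = fb.g * (1.0f - w) + mat.g * w;
    result.b = fb.b * (1.0f - w) + mat.b * w;
    result.a = fb.a * (1.0f - w) + mat.a * w;
    return result;
}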

Does that help?

Mark,

You have highlighted my problem exactly! Let’s say that I have an image with an alpha channel which denotes areas of the image that should be transparent, and I want this image to be used in a material to be rendered with alpha blending. To turn on alpha blending the spec says I have to include a transparent or a transparency term in the technique. So let’s say I include both with default values as follows

<transparent> <color> 1.0 1.0 1.0 1.0 </color> </transparent>
<transparency> <float> 1.0 </float> </transparency>

and I put a sampler for my image in the <diffuse> term.

The equations then work out like this

result.rgb = fb.rgb * (1.0f - 1.0f * 1.0f) + mat.rgb * (1.0f * 1.0f)
result.a = fb.a * (1.0f - 1.0f * 1.0f) + mat.a * (1.0f * 1.0f)

Which simplifies to

result.rgb = mat.rgb
result.a = mat.a

i.e. the visible rgb rendering ignores any alpha value from the image, which is rendered opaque.

OK, so let’s try another approach. Say this time I put the image sampler in the transparent term and let the diffuse color default.

result.rgb = fb.rgb * (1.0f - transparent.a * 1.0f) + mat.rgb * (transparent.a * 1.0f)
result.a = fb.a * (1.0f - transparent.a * 1.0f) + mat.a * (transparent.a * 1.0f)

which simplifies to

result.rgb = fb.rgb * (1.0f - transparent.a) + mat.rgb * transparent.a
result.a = fb.a * (1.0f - transparent.a) + mat.a * transparent.a

This is a fairly standard alpha blending equation with alpha values taken from the image alpha channel. But the visible result will be the output of the phong equation (diffuse, ambient, etc.), and none of the rgb information from the image will be visible. I cannot remember what the defaults are in this case but I suspect the result will be black modulated by the alpha map. The result alpha value will also be incorrect.

The nearest I can get to the desired result would appear to be to specify the same image sampler in both the diffuse and transparent terms. This gives the following equations (same as before).

result.rgb = fb.rgb * (1.0f - transparent.a) + mat.rgb * transparent.a
result.a = fb.a * (1.0f - transparent.a) + mat.a * transparent.a

Because transparent.a and mat.a are the same (phong notwithstanding) this simplifies to.

result.rgb = fb.rgb * (1.0f - mat.a) + mat.rgb * mat.a
result.a = fb.a * (1.0f - mat.a) + mat.a * mat.a

Which is about what is needed apart from an erroneous alpha value in the result.
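As an aside, that erroneous result alpha could probably be avoided where separate blend factors are available (glBlendFuncSeparate, OpenGL 1.4 or EXT_blend_func_separate). This is only a sketch of a possible workaround, not something the spec asks for:

#include <GL/gl.h>

// Blend RGB by mat.a as before, but keep the destination alpha free of the
// mat.a * mat.a term by giving the alpha channel its own factors.
void setupSeparateAlphaBlend()
{
    glEnable(GL_BLEND);
    glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,  // rgb
                        GL_ONE,       GL_ONE_MINUS_SRC_ALPHA); // alpha
}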

I have looked at this time and again ever since I first put the transparency code into the OSG dae plugin last year and still cannot make sense of it.

Taking as an example how images with an alpha channel used as a material are handled in Google Earth, it would appear that whenever a material is encountered that has an image sampler in the diffuse term, and that image has an alpha channel, blending is turned on using the image alpha channel as the source of the blending information.

Also, looking at the ColladaLoader example from the Collada sourceforge site, I find the following code:

// if the diffuse texture has an alpha channel or opacity is less than 1, this will return true
bool CCLMaterial::IsTransparent() {

The image alpha channel is always used as the blending factor source.

glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
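In other words the loader applies something like the following heuristic (my paraphrase of the comment above, not the actual ColladaLoader source):

// Paraphrase: a material is treated as transparent when its diffuse
// texture carries an alpha channel or its opacity is below 1.
bool isTransparentHeuristic(bool diffuseTextureHasAlpha, float opacity)
{
    return diffuseTextureHasAlpha || opacity < 1.0f;
}

// ...and when it returns true, blending is enabled with:
//   glEnable(GL_BLEND);
//   glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);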

So it looks as if a number of implementations have adopted their own interpretations of this part of the spec.

Help.

Roger

Hmm… the spec says that “If either <transparent> or <transparency> exists then transparency rendering is activated”. Probably “activated” is a bad choice of words since activation/deactivation of a run-time mode is not the point of that section, rather the blending equations are.

Pardon me but mat.a includes the diffuse texture alpha from the <phong> shader. It’s not ignored.

Hope that clears things up (pun intended ;-))

[quote=“marcus”]

Hmm… the spec says that “If either <transparent> or <transparency> exists then transparency rendering is activated”. Probably “activated” is a bad choice of words since activation/deactivation of a run-time mode is not the point of that section, rather the blending equations are.

Pardon me but mat.a includes the diffuse texture alpha from the <phong> shader. It’s not ignored.

Hope that clears things up (pun intended ;-))[/quote]

Mark,

:oops: Groan…the only thing I can think of is that I am misinterpreting what the result variable represents.

I had assumed that it was what was to be written back into the frame buffer as the end result of both the fragment shading and alpha blending process, and that in OpenGL terms I should set up both the fragment shader and alpha blender to achieve this value of result in the frame buffer. Am I incorrect in this? Is result solely the output of the fragment shading process to which a subsequent alpha blending process will be applied? Is it then this subsequent process which takes the value from result.a and uses it to blend the values from result.rgb into the frame buffer according to some other set of equations implied but not defined in the spec?

That is the only way I can see at the moment that result.a (mat.a) could be used to blend the rgb values from result into the frame buffer in the way I would expect an image with a transparency map encoded in its alpha channel to be handled.

If this is the case then to save yet another exchange of messages I will ask what to me is the obvious question. What is the subsequent set of alpha blending equations that is implied?

Thanks for your patience. I think I am getting a bit obsessive about this now and maybe I should leave it for someone else to sort out!

Roger

[quote=“roger”]
Is result solely the output of the fragment shading process to which a subsequent alpha blending process will be applied?
[/quote]
… of the rendering calculation (blending) described up to that point.

Firstly, this is a great conversation. Thanks for sharing your thoughts as you have highlighted spec bugs and areas that need clarification as we reach a consensus. :slight_smile:

Okay so, COLLADA <profile_COMMON> is not describing OpenGL operations so your position statement is too uhm pedantic. Yes you are trying to map it to OpenGL fixed-function pipeline and yes that is subject to interpretation. We are exploring how best to do that…

Yes I think we had established that with a spec bug against <transparent> and <reflective> “layers” within the <phong> and <blinn> shaders. OpenGL doesn’t handle those extra two (software renderer) layers right?

So you can ignore those layers (like the ColladaLoader does iirc) in your approximation or as (I think) we have been doing… figure out how the transparent part fits in the OpenGL context. Let’s back up and review the set of assertions and see where we are still diverging:

a. The shader (e.g. <phong>) “surface layer” calculation yields the “mat” values. This does include alpha values from e.g. <diffuse>.
b. The shader “transparent layer” calculation yields the “result” values, blending the surface and transparent layers. The <transparent> colors are not included in “mat”.
c. Ignoring <reflective> for now.
d. Making “result” (pg 249) the final value of interest for OpenGL fixed-function.

I think then we can agree on a good combination of glBlendFunc and/or glTexEnv for your implementation.
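For example, something along these lines (a sketch under assertions a–d, assuming the diffuse texture supplies mat’s RGBA; not yet an agreed answer):

#include <GL/gl.h>

void setupFixedFunctionTransparency()
{
    // mat = lit material color modulated by the RGBA texel
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    // result = fb * (1 - mat.a) + mat * mat.a
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}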

Mark,

It is the end of the day here now so I only have time for a quick reply. I think we are now closing in on a solution. I will need to consider your response more closely tomorrow and respond more fully then. I fully realise that Collada is a specification for the exchange of descriptions of 3d digital assets and as a specification should, as far as possible, be implementation neutral. What follows are some of my ramblings about syntax, semantics, and linguistic exchanges.

What I have always struggled with is that what the specification describes in detail is mostly the syntax of the dae information interchange and not the semantics of the information that is being exchanged. The developers of the specification, yourself included, must have reached a consensus on a common abstract semantic model. This model may only ever have existed in the shared experience of the working group and probably could never be fully documented. But enough of that semantic model must be communicated in the specification in order for implementors such as myself to understand how to translate the information conveyed in the dae exchange into a concrete implementation.

A large number of the implementors of dae readers and writers will be people using dae to exchange content between content creation tools (max, maya, blender, etc). These implementors will probably be translating between the dae semantic model and their own semantic model for describing 3d assets. A smaller number of implementors, such as myself, will be working on scene graph importers/exporters which will be translating to and from a much more restrictive 3d rendering pipeline such as OpenGL.

I would suggest that the point at which we all converge is “how things look”. So maybe a description of the abstract “Collada rendering pipeline”, for want of a better term, is what is needed.

Can I suggest that we use a standard example for any future discussions on blending? Let us assume for a start (more complex scenarios can come later) that the frame buffer contains solid opaque blue, encoded RGBA like this: 0.0 0.0 1.0 1.0; that I have an image of a green and black chequerboard where the green is opaque, encoded like this: 0.0 1.0 0.0 1.0, and the black is transparent, encoded like this: 0.0 0.0 0.0 0.0. I want to use this image as a material, and what I want to see eventually in the frame buffer is a green and blue chequerboard.

Just one final point and an OpenGL one I am afraid. As you may be aware standard OpenGL alpha blending is performed in the pipeline after the pieces of fixed functionality that can be replaced by a programmable shader. You can mimic it in a programmable shader but then it may well be applied unnecessarily to fragments that would be discarded by one of the fragment logical tests that occur after the programmable shader has run but before alpha blending would normally be performed. So there are good reasons to use the fixed alpha blending functions if one can.

This is longer than I intended. Time I was not here!

Roger

Mark,

I have had time to sleep on this now. Although I admit I spent some time awake going over it! I will respond to your points in what I hope is a more logical order and not the order they appear in the reply (I realise that that order was determined by my original message). To avoid confusion with the result variable we have been talking about I am going to use the term outcome to denote the eventual visual result of whatever rendering process is used to view the model, be this a rasterizer or a ray tracer or some other process.

[quote=“marcus”]
Yes I think we had established that with a spec bug against <transparent> and <reflective> “layers” within the <phong> and <blinn> shaders. OpenGL doesn’t handle those extra two (software renderer) layers right?
[/quote]
I think you are answering yes to my question here. I think this is the moot point. If you are actually answering no then you can ignore most of the rest of this message and skip to the bottom. As an aside, I can only access the public Bugzilla, and the bug that I think you are referring to only contains a reference to a bug in the private Bugzilla, so I cannot see what is in it. If you are answering yes, then that leads me to two ancillary questions/observations.

i. What are the equations for the subsequent alpha blending process, and do the values of the transparent and transparency elements from the technique affect them, especially the value of opaque? These equations need to be specified if implementors are to map the abstract Collada rendering model into their own rendering model.

ii. The “subsequent blending process” I refer to will determine how the shaded (with “result”) mesh is blended with the current contents of “outcome” (in my case the frame buffer). In view of this, and given the following two statements:

[quote=“marcus”]
a. The shader (e.g. <phong>) “surface layer” calculation yields the “mat” values. This does include alpha values from e.g. <diffuse>.
b. The shader “transparent layer” calculation yields the “result” values, blending the surface and transparent layers. The <transparent> colors are not included in “mat”.
[/quote]

Then I am slightly surprised by the inclusion of an fb (framebuffer) variable in the “transparent layer” equations. Your answer to this could of course be, “That is the way we want it to be”. But that would make it difficult from my point of view to map onto OpenGL fixed functionality.

Your statement here does not appear consistent with a yes answer to my “subsequent alpha blending process” question.

a. Yes agreed.
b. Yes agreed, but subject to my comments about the inclusion of the frame buffer in the equations.
c. Yes agreed.
d. No. This is not consistent with your yes answer above. I would say this makes result the input to a subsequent blending process which determines the value of “outcome”. In OpenGL terms this would be the fixed-functionality alpha blending process which takes the current contents of the frame buffer and the shaded incoming fragment (result) and blends them back into the frame buffer.

To help us proceed, could you bear with me and tell me how you would write a collada phong technique that would produce the “outcome” I described for my test model in my previous post?

Roger

Mark,

I hope my last set of replies did not put you off completely. I have been looking at the documentation for 3ds max and Maya and trying to get an understanding of things from a content creator’s point of view. The more I look at that documentation the more I think that the equations in the spec are a little misleading. I think they need to be split into two. Using the excellent diagram (Figure 5.5) on page 94 of your book, I would suggest that the first set should describe how the various terms are used in the “Fragment Shader” phase and the second set should describe how they are used in (or affect) the subsequent “Output Merger” phase. That would also help my understanding of how these things should work in profiles other than the common profile, especially where those profiles permit multiple techniques per effect. I would also really appreciate any comments you have on my previous posts.

Roger

Hi Roger, thanks for sharing your thoughts.

COLLADA carries information along a content pipeline from source (e.g. DCC) to sink (e.g. game engine data) in an ideally policy-free manner. We want to transport the data and metadata as neutrally as possible without dictating how it is used by tools in that pipeline. This has mostly been accomplished, other than in the vendor-specific effects profiles (e.g. GLSL) that actually can define a specific implementation’s rendering configuration.

COLLADA strives not to say how things look to that degree. It’s ok for you to take a visual scene and render it however you like for your use-case, interpreting as much of the information as you want to process. A tool that exports COLLADA should include enough information so that its own semantic model can be conveyed to subsequent tools (including re-importing). COLLADA is supposed to be flexible and extensible enough to support this model of semantics without ownership of them.

I promise to return to this thread soon to continue the conversation. :slight_smile:

Mark,

Thanks for your reply. I realise that you have many things to do other than responding to my ramblings :slight_smile: . I look forward to picking up this conversation in due course. I have actually enjoyed the break, I was starting to wake up in the night thinking about this!

Roger

I think this example needs to be restated in terms of geometry and materials to fit into the context of this thread. Otherwise this could be considered a full-screen effect and that is something else altogether.

For example, as geometry, consider two full screen quads that each have a material that is a <constant> shader with <emission><color> 0.0 0.0 1.0 1.0 </color></emission> and <emission><texture texture="green_checkerboard.png" texcoord="#my_texcords" /></emission> respectively, and drawn in that order.

Okay.

[quote=“roger”]
[quote=“marcus”]
Yes I think we had established that with a spec bug against <transparent> and <reflective> “layers” within the <phong> and <blinn> shaders. OpenGL doesn’t handle those extra two (software renderer) layers right?
[/quote]
I think you are answering yes to my question here. I think this is the moot point. If you are actually answering no then you can ignore most of the rest of this message and skip to the bottom.
[/quote]
I was answering ‘yes’ with a caveat because you asked “solely the output of the fragment shading process”. That’s fairly restrictive and implementation-centric.

What I did want to convey is that, to me at least and subject to concurrence, we have been discussing: composition of visual layers, COLLADA’s <profile_COMMON> data model for that, and ultimately a mapping to OpenGL fixed-function that you can use. We are identifying areas of the COLLADA spec that need clarification.

The COLLADA common profile describes three layers of appearance for geometry (of which <constant> is the simplest case): surface, transparent, reflective. The spec has equations for two of the layers: surface (e.g. <constant>) and transparent (e.g. pg 249).

We’ve identified that an equation that adds in the reflective layer is missing from the spec. Lacking recognition of that layer, the transparent equation calls its result “framebuffer” when it is actually just a “layer result” of blending the surface and transparent layers. Given that many renderers do not have additional layers, this might be the final framebuffer result in many cases.

The common profile is fairly simple, so would it be enough to say that the abstract pipeline is a simple composition of F = S + T + R layers? Plus <extra> layers too?
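As a sketch of that composition (the reflective weighting is the part we agree is missing from the spec, so that term is only a placeholder assumption):

struct Vec4 { float r, g, b, a; };

static Vec4 lerp(const Vec4& a, const Vec4& b, float t)
{
    return { a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t, a.a + (b.a - a.a) * t };
}

// F = S + T + R: the surface layer blended over the framebuffer by the
// transparent layer's weight (pg 249), then a placeholder reflective
// layer on top.
Vec4 composeLayers(const Vec4& fb, const Vec4& surface,
                   const Vec4& transparent, float transparency,
                   const Vec4& reflective, float reflectivity)
{
    Vec4 f = lerp(fb, surface, transparent.a * transparency); // S + T
    f = lerp(f, reflective, reflectivity);                    // placeholder R
    return f;
}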

[quote=“marcus”]

I think this example needs to be restated in terms of geometry and materials to fit into the context of this thread. Otherwise this could be considered a full-screen effect and that is something else altogether.

For example, as geometry, consider two full screen quads that each have a material that is a <constant> shader with <emission><color> 0.0 0.0 1.0 1.0 </color></emission> and <emission><texture texture="green_checkerboard.png" texcoord="#my_texcords" /></emission> respectively, and drawn in that order.[/quote]

Agreed, but I would extend the example further to avoid confusion over what you mean by full screen. Can we say that our example is a collada <visual_scene> containing the two quad geometries and an orthographic camera, arranged in such a way that the camera is looking directly at the quad with the chequerboard texture and the plain quad is positioned a little way behind the textured quad?

I think this results in a document similar to this one, which I created using SketchUp and hand-edited (I did not bother to change the triangles to quads, but the idea is there I think):

<?xml version="1.0" encoding="utf-8"?>
<COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema" version="1.4.1">
  <asset>
    <unit name="meters" meter="1.0"/>
    <up_axis>Z_UP</up_axis>
  </asset>
  <library_images>
    <image id="material_1_1_0-image" name="material_1_1_0-image">
      <init_from>chequerboard.png</init_from>
    </image>
  </library_images>
  <library_materials>
    <material id="material_0_0ID" name="material_0_0">
      <instance_effect url="#material_0_0-effect"/>
    </material>
    <material id="material_1_1_0ID" name="material_1_1_0">
      <instance_effect url="#material_1_1_0-effect"/>
    </material>
  </library_materials>
  <library_effects>
    <effect id="material_0_0-effect" name="material_0_0-effect">
      <profile_COMMON>
        <technique sid="COMMON">
          <constant>
            <emission>
              <color>0.000000 0.000000 1.000000 1</color>
            </emission>
          </constant>
        </technique>
      </profile_COMMON>
    </effect>
    <effect id="material_1_1_0-effect" name="material_1_1_0-effect">
      <profile_COMMON>
        <newparam sid="material_1_1_0-image-surface">
          <surface type="2D">
            <init_from>material_1_1_0-image</init_from>
          </surface>
        </newparam>
        <newparam sid="material_1_1_0-image-sampler">
          <sampler2D>
            <source>material_1_1_0-image-surface</source>
          </sampler2D>
        </newparam>
        <technique sid="COMMON">
          <constant>
            <emission>
              <texture texture="material_1_1_0-image-sampler" texcoord="UVSET0"/>
            </emission>
            <transparency>
              <float>1.000000</float>
            </transparency>
          </constant>
        </technique>
      </profile_COMMON>
    </effect>
  </library_effects>
  <library_geometries>
    <geometry id="mesh1-geometry" name="mesh1-geometry">
      <mesh>
        <source id="mesh1-geometry-position">
          <float_array id="mesh1-geometry-position-array" count="12">1.000000 1.000000 0.000000 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 1.000000 0.000000 0.000000</float_array>
          <technique_common>
            <accessor source="#mesh1-geometry-position-array" count="4" stride="3">
              <param name="X" type="float"/>
              <param name="Y" type="float"/>
              <param name="Z" type="float"/>
            </accessor>
          </technique_common>
        </source>
        <source id="mesh1-geometry-normal">
          <float_array id="mesh1-geometry-normal-array" count="3">-0.000000 -0.000000 1.000000</float_array>
          <technique_common>
            <accessor source="#mesh1-geometry-normal-array" count="1" stride="3">
              <param name="X" type="float"/>
              <param name="Y" type="float"/>
              <param name="Z" type="float"/>
            </accessor>
          </technique_common>
        </source>
        <source id="mesh1-geometry-uv">
          <float_array id="mesh1-geometry-uv-array" count="8">-39.370079 39.370079 0.000000 0.000000 0.000000 39.370079 -39.370079 0.000000</float_array>
          <technique_common>
            <accessor source="#mesh1-geometry-uv-array" count="4" stride="2">
              <param name="S" type="float"/>
              <param name="T" type="float"/>
            </accessor>
          </technique_common>
        </source>
        <vertices id="mesh1-geometry-vertex">
          <input semantic="POSITION" source="#mesh1-geometry-position"/>
        </vertices>
        <triangles material="material_0_0" count="2">
          <input semantic="VERTEX" source="#mesh1-geometry-vertex" offset="0"/>
          <input semantic="NORMAL" source="#mesh1-geometry-normal" offset="1"/>
          <input semantic="TEXCOORD" source="#mesh1-geometry-uv" offset="2" set="0"/>
          <p>0 0 0 1 0 1 2 0 2 1 0 1 0 0 0 3 0 3</p>
        </triangles>
      </mesh>
    </geometry>
    <geometry id="mesh2-geometry" name="mesh2-geometry">
      <mesh>
        <source id="mesh2-geometry-position">
          <float_array id="mesh2-geometry-position-array" count="12">1.000000 0.000000 1.000000 0.000000 1.000000 1.000000 0.000000 0.000000 1.000000 1.000000 1.000000 1.000000</float_array>
          <technique_common>
            <accessor source="#mesh2-geometry-position-array" count="4" stride="3">
              <param name="X" type="float"/>
              <param name="Y" type="float"/>
              <param name="Z" type="float"/>
            </accessor>
          </technique_common>
        </source>
        <source id="mesh2-geometry-normal">
          <float_array id="mesh2-geometry-normal-array" count="3">0.000000 0.000000 1.000000</float_array>
          <technique_common>
            <accessor source="#mesh2-geometry-normal-array" count="1" stride="3">
              <param name="X" type="float"/>
              <param name="Y" type="float"/>
              <param name="Z" type="float"/>
            </accessor>
          </technique_common>
        </source>
        <source id="mesh2-geometry-uv">
          <float_array id="mesh2-geometry-uv-array" count="8">1.000000 0.000000 0.000000 1.000000 0.000000 0.000000 1.000000 1.000000</float_array>
          <technique_common>
            <accessor source="#mesh2-geometry-uv-array" count="4" stride="2">
              <param name="S" type="float"/>
              <param name="T" type="float"/>
            </accessor>
          </technique_common>
        </source>
        <vertices id="mesh2-geometry-vertex">
          <input semantic="POSITION" source="#mesh2-geometry-position"/>
        </vertices>
        <triangles material="material_1_1_0" count="2">
          <input semantic="VERTEX" source="#mesh2-geometry-vertex" offset="0"/>
          <input semantic="NORMAL" source="#mesh2-geometry-normal" offset="1"/>
          <input semantic="TEXCOORD" source="#mesh2-geometry-uv" offset="2" set="0"/>
          <p>0 0 0 1 0 1 2 0 2 1 0 1 0 0 0 3 0 3</p>
        </triangles>
      </mesh>
    </geometry>
  </library_geometries>
  <library_cameras>
    <camera id="Camera-camera" name="Camera-camera">
      <optics>
        <technique_common>
          <orthographic>
            <xmag>1.862633</xmag>
            <ymag>1.396975</ymag>
            <znear>0.025400</znear>
            <zfar>25.400000</zfar>
          </orthographic>
        </technique_common>
      </optics>
    </camera>
  </library_cameras>
  <library_visual_scenes>
    <visual_scene id="SketchUpScene" name="SketchUpScene">
      <node id="Model" name="Model">
        <node id="mesh1" name="mesh1">
          <instance_geometry url="#mesh1-geometry">
            <bind_material>
              <technique_common>
                <instance_material symbol="material_0_0" target="#material_0_0ID"/>
              </technique_common>
            </bind_material>
          </instance_geometry>
        </node>
        <node id="mesh2" name="mesh2">
          <instance_geometry url="#mesh2-geometry">
            <bind_material>
              <technique_common>
                <instance_material symbol="material_1_1_0" target="#material_1_1_0ID">
                  <bind_vertex_input semantic="UVSET0" input_semantic="TEXCOORD" input_set="0"/>
                </instance_material>
              </technique_common>
            </bind_material>
          </instance_geometry>
        </node>
      </node>
      <node id="Camera" name="Camera">
        <matrix>
          -0.000159  1.000000  0.000336 0.505289
          -1.000000 -0.000159 -0.000000 0.575529
           0.000000 -0.000336  1.000000 2.717795
           0.000000  0.000000  0.000000 1.000000
        </matrix>
        <instance_camera url="#Camera-camera"/>
      </node>
    </visual_scene>
  </library_visual_scenes>
  <scene>
    <instance_visual_scene url="#SketchUpScene"/>
  </scene>
</COLLADA>

Roger