Offset/Scale secondary UVs from accessors for lightmaps?

Hello!

I’m working on a custom exporter from Unity3D to glTF, and I’ve finally got my own exporter writing secondary UVs for lightmaps :slight_smile: (Yes, I know the material doesn’t support lightmap textures so far, but I’m planning to add the lightmap texture later with code in three.js.)

The thing is, inside Unity3D every base mesh may have an “initial” set of secondary UVs (UV2). When baking lights, these initial UV2s get transformed by an offset/scale that Unity saves on the MeshRenderer (Renderer.lightmapScaleOffset). The initial UVs are scaled and offset, and in the process a “new set of UVs” is created for the lightmaps.
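For reference, a minimal sketch of baking that transform at export time (assuming Unity’s convention that lightmapScaleOffset packs (scaleX, scaleY, offsetX, offsetY) into a Vector4; the function name is just illustrative):

```javascript
// Bake Unity's Renderer.lightmapScaleOffset into a UV2 array.
// `uv2` is a flat [u0, v0, u1, v1, ...] array; `so` is the
// (scaleX, scaleY, offsetX, offsetY) vector Unity stores per renderer.
function bakeLightmapUVs(uv2, so) {
  const out = new Float32Array(uv2.length);
  for (let i = 0; i < uv2.length; i += 2) {
    out[i]     = uv2[i]     * so[0] + so[2]; // u * scaleX + offsetX
    out[i + 1] = uv2[i + 1] * so[1] + so[3]; // v * scaleY + offsetY
  }
  return out;
}
```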

Working on the exporter, I’ve noticed there is an extension called “KHR_texture_transform”, but that offset/scale is applied to the whole texture/material, and I’m wondering if there is an extension that applies only to the UVs themselves. For example:

Right now, this is an example of KHR_texture_transform:

{
  "materials": [{
    "emissiveTexture": {
      "index": 0,
      "extensions": {
        "KHR_texture_transform": {
          "offset": [0, 1],
          "rotation": 1.57079632679,
          "scale": [0.5, 0.5]
        }
      }
    }
  }]
}
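For comparison, the KHR_texture_transform spec defines the transform as the matrix product translation * rotation * scale applied to the UV. A plain-JavaScript sketch of that order of operations (the function name is my own):

```javascript
// Apply a KHR_texture_transform (offset, rotation, scale) to one UV pair,
// following the spec's matrix order: translation * rotation * scale,
// i.e. scale first, then rotate, then offset.
function applyTextureTransform([u, v], offset, rotation, scale) {
  const x = u * scale[0];
  const y = v * scale[1];
  const c = Math.cos(rotation), s = Math.sin(rotation);
  const rx =  c * x + s * y;   // rotation matrix per the spec:
  const ry = -s * x + c * y;   // [cos r, sin r; -sin r, cos r]
  return [rx + offset[0], ry + offset[1]];
}
```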

and I’m looking for something like this:

"meshes": [{
  "primitives": [{
    "attributes": {
      "POSITION": 1,
      "NORMAL": 2,
      "TEXCOORD_0": 3,
      "TEXCOORD_1": 4
    },
    "indices": 0,
    "material": 0,
    "extensions": {                  // (I'm not sure if it's even possible)
      "EXT_primitives_uvtransform": {
        "offset": [0, 0.25],
        "scale": [0.5, 0.5]
      }
    }
  }]
}, ...

So for now, the process I’m taking to save lightmap UVs is:
Exporting:
1- Get the base UV2s
2- Scale and offset them per exported static mesh
3- Save the modified UVs to the binary

Reading:
1- Read the UV2s
2- Apply the UV2s

In that process, the binary gets a lot bigger, because the lightmap UVs have to be saved over and over again for each static mesh.

The approach I want to reach is:
Exporting:
1- Get the base UV2s
2- Save them to the binary in an accessor

Reading:
1- Read the UV2s
2- (extension) Apply the offset/scale only to the UV2s
3- Apply the UV2s

In this process, instances of static meshes will reference the same accessor but apply their own offset/scale when reading the glTF file, and the binary file size will be reduced drastically in large static scenes.

So, is this possible? If it is, any advice on where to start, or what to refer to, to begin working on an extension like this :slight_smile: ? Thanks in advance!

The glTF way of transforming UVs is to use KHR_texture_transform, applied to the texture slot rather than the UV attribute, as you’ve noticed.

If you need a custom workflow for your lightmap setup here, I’d recommend just using glTF’s “extras” storage to keep track of the offsets associated with these UVs. For example:

{
  "meshes": [{
    "primitives": [{
      "indices": 0,
      "attributes": {"POSITION": 1, "TEXCOORD_0": 2, "TEXCOORD_1": 3},
      "extras": {
        "offset": [0, 0.25],
        "scale": [0.5, 0.5]
      }
    }]
  }]
  // ...
}

In three.js, the extras will automatically be assigned to your loaded objects as mesh.userData, and you can then apply them to your lightmap texture. You might need to try putting the extras on the mesh instead of the primitive, depending on which of the two three.js maps to its THREE.Mesh type.
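A loader-side sketch of that idea (hypothetical function name; assumes GLTFLoader has put the extras in mesh.userData and the second UV set in an attribute such as uv2 — the attribute name varies between three.js versions):

```javascript
// Apply the per-instance lightmap offset/scale stored in glTF `extras`
// to a UV2 attribute array, in place. If several meshes share one
// BufferAttribute, clone it first so each instance gets its own copy.
function applyLightmapTransform(uvArray, offset, scale) {
  for (let i = 0; i < uvArray.length; i += 2) {
    uvArray[i]     = uvArray[i]     * scale[0] + offset[0];
    uvArray[i + 1] = uvArray[i + 1] * scale[1] + offset[1];
  }
  return uvArray;
}

// e.g. after loading (attribute name depends on your three.js version):
// const { offset, scale } = mesh.userData;  // from "extras"
// applyLightmapTransform(mesh.geometry.attributes.uv2.array, offset, scale);
// mesh.geometry.attributes.uv2.needsUpdate = true;
```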

Thank you @donmccurdy!

I will give it a try that way, and I’ll upload the results if I manage to get it done correctly. Thank you! :smile:
