I’ve begun to second-guess myself on this one lately. The way I used to do it was to translate to the origin, do the rotation, and translate back. That’s certainly the fail-safe method, because it always works. But I believe all you really need to do is reverse the order of operations in your multiplication:
WorldMatrix *= glm::rotate(glm::mat4(1.0f), glm::radians(1.0f), glm::vec3(0.0f, 1.0f, 0.0f));
outside of the shader in the C++ code. I hate that GLM makes me do it this way. In English, that’s
Object’s matrix = object’s matrix times a rotation matrix, which is built by taking an identity matrix and applying a one-degree rotation around the Y axis (glm::radians converts the degrees to the radians that glm::rotate expects).
I apply that every frame and so it’s a continuous rotation.
But basically, I believe that is World = World * Rotation. If that’s not working, switch it to World = Rotation * World. Since matrix multiplication is not commutative, you’ll get a different result.
In your case, you appear to be attempting to move to the origin. I assume that’s why it translates and then does basically an inverse of that translate after the rotation. But it’s not based on the actual position (or maybe it happens to be, with the centroid there, as you mentioned).
To pull off that method, you need to translate back to the origin, which is the negative of the position itself. At (10, -12, 4) you need to translate by (-10, +12, -4) to get to the origin, and then by (10, -12, 4) to get back to where you were after the rotation.
But in this case, I’m not sure the translation is even necessary.
I notice that you are not using that Projection matrix in the uniform. So, you’re in normalized device coordinates, which is a bit weird. But the main thing is that you don’t really have a Z value there because you are in 2D.
I have one example that does a rotation around the Z axis; the relevant part, where PositionOffset is a uniform parameter, looks like this:
uniform vec2 PositionOffset;
gl_Position = vec4(Pos.x + PositionOffset.x, Pos.y + PositionOffset.y, Pos.z, 1.0f);
Anyway, the long and short of it is: try vec4(in_position, 1.0) * rotationZ instead of rotationZ * vec4(in_position, 1.0). Reversing the order should make the difference between revolving around the local axis and revolving around the global axis. (Be aware, though, that in GLSL vec * mat treats the vector as a row vector, which is the same as multiplying by the matrix’s transpose.)
And here’s my shader, which expects all that matrix algebra to be done in the C++ code using GLM and submitted as a World matrix. I use this shader for everything (except maybe something like a 2D HUD or a post-process-effects shader); anything else I want will just be a modification of it. It’s highly versatile and covers all the basics.

The entire Visual Studio project is available on my website if you want to see the C++ code that calls it, and there’s also a video clip there where you can see what the code does if you don’t want to download and run it. Part of what it does is rotate a triangle, but there’s also a cube that is rotated and moved with the keyboard, and a camera moved by the keyboard or game pad. I’m hoping to get time to clean up the code a bit, comment it, and do a video explaining it at some point, but I’ve just posted the code in the meantime.
#version 450 core
layout (location = 0) in vec3 Pos;
layout (location = 1) in vec2 UV;
layout (location = 2) in vec3 Normal;
layout (location = 3) in vec4 Color;
uniform mat4 WorldMatrix;
uniform mat4 ViewMatrix;
uniform mat4 ProjectionMatrix;
smooth out vec2 TextureCoordinates;
smooth out vec3 VertexNormal;
smooth out vec4 RGBAColor;
smooth out vec4 PositionRelativeToCamera;
out vec3 WorldSpacePosition;
void main()
{
	gl_Position = WorldMatrix * vec4(Pos, 1.0f);	//Apply the object's world matrix.
	WorldSpacePosition = gl_Position.xyz;	//Save the just-calculated position of the vertex in the 3D world. Converted to vec3 because it will be used with other vec3's.
	gl_Position = ViewMatrix * gl_Position;	//Apply the view matrix for the camera.
	PositionRelativeToCamera = gl_Position;
	gl_Position = ProjectionMatrix * gl_Position;	//Apply the projection matrix to project it onto a 2D plane.
	TextureCoordinates = UV;	//Pass the texture coordinates through to the fragment shader.
	VertexNormal = mat3(WorldMatrix) * Normal;	//Rotate the normal according to how the model is oriented in the 3D world.
	RGBAColor = Color;	//Pass the color through to the fragment shader.
}
And here’s the matching fragment shader:
#version 450 core
in vec2 TextureCoordinates;
in vec3 VertexNormal;
in vec4 RGBAColor;
//in float FogFactor;	//Unused; fog is computed per-pixel in main() below.
in vec4 PositionRelativeToCamera;
in vec3 WorldSpacePosition;
layout (location = 0) out vec4 OutputColor;
uniform vec4 AmbientLightColor;
uniform vec3 DiffuseLightDirection;
uniform vec4 DiffuseLightColor;
uniform vec3 CameraPosition;
uniform float SpecularPower;
uniform vec4 FogColor;
uniform float FogStartDistance;
uniform float FogMaxDistance;
uniform bool UseTexture;
uniform sampler2D Texture0;
vec4 BlinnSpecular(in vec3 LightDirection, in vec4 LightColor, in vec3 PixelNormal, in vec3 CameraDirection, in float SpecularPower)
{
	vec3 HalfwayNormal = normalize(LightDirection + CameraDirection);
	float SpecularHighlightAmount = pow(clamp(dot(PixelNormal, HalfwayNormal), 0.0, 1.0), SpecularPower);
	vec4 SpecularLight = SpecularHighlightAmount * LightColor;
	return SpecularLight;
}
vec4 PhongSpecular(in vec3 LightDirection, in vec4 LightColor, in vec3 PixelNormal, in vec3 CameraDirection, in float SpecularPower)
{
	vec3 ReflectedLightDirection = 2.0 * PixelNormal * clamp(dot(PixelNormal, LightDirection), 0.0, 1.0) - LightDirection;
	float SpecularHighlightAmount = pow(clamp(dot(ReflectedLightDirection, CameraDirection), 0.0, 1.0), SpecularPower);
	vec4 SpecularLight = SpecularHighlightAmount * LightColor;
	return SpecularLight;
}
void main()
{
	vec4 InputColor;
	vec3 LightDirection;
	vec4 DiffuseLight;
	vec4 SpecularColor;
	vec3 CameraDirection;	//vec3 because the w component really doesn't belong in a 3D vector normal.
	if (UseTexture)
		InputColor = texture(Texture0, TextureCoordinates);
	else
		InputColor = RGBAColor;	// vec4(0.0, 0.0, 0.0, 1.0);
	LightDirection = -normalize(DiffuseLightDirection);	//Normal must face into the light, rather than WITH the light, to be lit up.
	float DiffuseLightPercentage = max(dot(VertexNormal, LightDirection), 0.0);	//Percentage is based on the angle between the direction of the light and the vertex's normal.
	DiffuseLight = clamp((DiffuseLightColor * InputColor) * DiffuseLightPercentage, 0.0, 1.0);	//Apply only that percentage of the diffuse color. Clamp keeps the output between 0.0 and 1.0.
	CameraDirection = normalize(CameraPosition - WorldSpacePosition);	//Create a normal that points from the pixel toward the camera.
	if (DiffuseLightPercentage == 0.0f)
		SpecularColor = vec4(0.0f, 0.0f, 0.0f, 1.0f);
	else
		//SpecularColor = BlinnSpecular(LightDirection, DiffuseLightColor, normalize(VertexNormal), CameraDirection, SpecularPower);
		SpecularColor = PhongSpecular(LightDirection, DiffuseLightColor, normalize(VertexNormal), CameraDirection, SpecularPower);
	float FogDensity = 0.1f;
	float LOG2 = 1.442695f;
	float FogFactor = exp2(-FogDensity * FogDensity * PositionRelativeToCamera.z * PositionRelativeToCamera.z * LOG2);
	FogFactor = 1.0 - FogFactor;
	OutputColor = RGBAColor * (AmbientLightColor * InputColor) + DiffuseLight + SpecularColor;
	OutputColor = mix(OutputColor, FogColor, FogFactor);
	//OutputColor = vec4(0.0f, 0.5f, 0.0f, 1.0f);
}
That shader code may look really complicated if you are new to shaders, but I have a YouTube video series that explains pretty much every line of it. The series actually covers HLSL rather than GLSL; this is the GLSL version of the code from that series. I tried to keep the variable names largely the same, so you should be able to walk through the videos with this GLSL version and still follow along. The only difference from the shader in the series is that this one is slightly more advanced because it does fog, which I didn’t cover in the HLSL series.
If I get some time, I hope to add this shader to the series and explain the fog part, as well as go over the differences between HLSL and GLSL. The big one is that uniforms in GLSL are constant buffers in HLSL. In HLSL you get into this register business that’s kind of confusing; in GLSL you don’t really bother with that and instead just use in and out variables. Some of the syntax is slightly different, but that’s pretty much it. You should really be able to see that by comparing this code to the HLSL version of the same thing.
I might also mention that I’m not a fan of normalized device coordinates. I’m not sure where you would ever actually use them in a game. Maybe for some 2D HUD stuff or a 2D game? Since I’m mostly concerned with 3D, I get the view and projection matrices into the mix ASAP and stop worrying about NDCs. Once you apply the view and projection matrices (or really just the projection matrix by itself), you’re no longer working in NDCs. In all the years I spent with XNA and DX11 in HLSL, I never even heard of NDCs; it was just assumed you always applied a projection matrix, whether orthographic or perspective. Maybe one day I’ll find a use for them, but that day has not arrived yet. It annoys me a bit that so many OGL examples spend so much time on them, and it’s a thing the DX world doesn’t even seem to acknowledge exists. It can’t be all that important if there’s no mention of it on the DX side. In my mind it’s something you want to get rid of as quickly as possible by applying that projection matrix and entering the 3D world. Of course, if your goal is 2D, maybe you care a bit more, although I thought you’d still want to use an orthographic projection matrix instead of working in NDCs. I’ve never really done 2D, so I don’t know.