FPS drops when maximising the window

Hi,

Why does the FPS drop when I maximise the window of my OpenGL application?

Is there any explanation for that?

Thanks

If it takes one man one hour to dig one ditch, how many hours does it take him to dig 11 ditches?

Simple explanation: a larger window means more pixels to be drawn, which means more work per frame, which means fewer frames can be completed in the same amount of time.

Yes, but I am only rendering one cube. Does that logic still apply?

One cube might occupy 200 pixels in a normal window, it might occupy 800 pixels in a maximized window. That would be 4 times the work. Your glClear call will have to clear a larger area. If your SwapBuffers call needs to do a copy, that’s more work too.

More screen space == more pixels == more work == more time == lower framerate.
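To put hypothetical numbers on it: going from, say, an 800×600 window to a 1920×1080 one roughly quadruples the pixel count that glClear and the rasterizer have to touch every frame:

```
 800 ×  600  =   480,000 pixels
1920 × 1080  = 2,073,600 pixels   (≈ 4.3× as many)
```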

The cube stays the same size and in the same position. It doesn’t even move to the centre when I maximise.

Then the question is irrelevant. You are rendering a trivial object, likely using an equally trivial shader. You are in no way stressing the GPU, so any appearance of performance degradation is either extremely tiny (ie: 7500 -> 7400 fps) or is otherwise not indicative of any actual problem.

In which case, I suspect that you aren’t updating the viewport.
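For example, a minimal sketch (assuming a GLFW 3.x window) of keeping the viewport in sync with the framebuffer via a resize callback:

```cpp
// Sketch only: register once after creating the window and making the
// context current; GLFW then calls this whenever the framebuffer size changes.
static void framebuffer_size_callback(GLFWwindow* /*window*/, int width, int height)
{
    glViewport(0, 0, width, height);
}

// ... during setup ...
glfwSetFramebufferSizeCallback(window, framebuffer_size_callback);
```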

The change in frame rate is probably a consequence of the time taken to copy the back buffer to the screen. If that’s the case, you should see the same result without a cube, just glClear.
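A quick way to test that theory is a stripped-down loop like this (a sketch, assuming a GLFW window and GL context are already created): if the frame rate still changes when you enlarge the window, the cost is in the clear/swap, not in your cube.

```cpp
// Sketch only: clear and swap, nothing else.
while (!glfwWindowShouldClose(window))
{
    int fbWidth = 0, fbHeight = 0;
    glfwGetFramebufferSize(window, &fbWidth, &fbHeight);
    glViewport(0, 0, fbWidth, fbHeight);

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glfwSwapBuffers(window);
    glfwPollEvents();
}
```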

Not nearly enough info here. Without any code, details, context, or environment, this could be anything.

  1. VSync OFF, right?
  2. How are you timing your frames?
    • CPU timers or GPU timers?
    • If you’re not timing from and to the exact same point in your frameloop (i.e. the point right after the SwapBuffers() and glClear() of the window for the next frame), go back and fix that. (See the timing sketch below this list.)
    • Post the timing code, including the SwapBuffers() and glClear() of the window for the next frame.
    • In fact, if your test program is simple, just post the whole thing
      (mark it as preformatted: select it and click the </> toolbar button, or enclose the code in triple-backticks: ```)
  3. What do you mean when you say “maximize”? Windowed or borderless? Focused or not?
  4. And for context, what GPU, GL driver, and OS?

(Questions #3 and #4 are to find out whether MS Windows flip present mode behavior might be in play here.)
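For question #2, here is a minimal sketch of what I mean (assuming GLFW’s glfwGetTime() as the CPU clock and that GLFW controls your swap interval); the important part is that the timestamp is taken at exactly one fixed point per frame:

```cpp
#include <cstdio>        // printf; GLFW/GL headers and window setup assumed above

glfwSwapInterval(0);     // request VSync OFF for the measurement (question #1)

double prev = glfwGetTime();
while (!glfwWindowShouldClose(window))
{
    // ... draw the scene here ...

    glfwSwapBuffers(window);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // clear for the next frame

    // One fixed timing point per frame, right after SwapBuffers + clear:
    double now     = glfwGetTime();
    double frameMs = (now - prev) * 1000.0;               // CPU time per frame, in msec
    prev = now;
    printf("%.3f ms/frame  (%.1f FPS)\n", frameMs, 1000.0 / frameMs);

    glfwPollEvents();
}
```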

So, I’m using an Intel HD Graphics 5500 and I updated the driver 2 days ago (I use Windows 10). In the advanced settings, I cannot set V-Sync because it says: “This display doesn’t support any of the advanced settings.”
Here is a piece of my code:

while (!glfwWindowShouldClose(window))
	{
		glfwGetWindowSize(window, &width, &heigh);
		glViewport(0, 0, width, heigh);
		/* Render here */
		

		glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

		glUseProgram(program);

		for(size_t i = 0; i < myMeshes.size(); i++)
		{
			

			glUniform1i(glGetUniformLocation(program, "ourTexture"), 0);

			glActiveTexture(GL_TEXTURE0);
			glBindTexture(GL_TEXTURE_2D, textureArray[i]);


			
			int lightColorLoc = glGetUniformLocation(program, "lightColor");
			glUniform3fv(lightColorLoc, 1, glm::value_ptr(glm::vec3(1.0f, 0.0f, 0.0f)));

			int objectColorLoc = glGetUniformLocation(program, "objectColor");
			glUniform3fv(objectColorLoc, 1, glm::value_ptr(glm::vec3(1.0f, 0.5f, 0.31f)));

			glm::vec3 lightPos(1.2f, 1.0f, 2.0f);
			int lightPosLoc = glGetUniformLocation(program, "lightPos");
			glUniform3fv(lightPosLoc, 1, glm::value_ptr(lightPos));


			glm::mat4 view = glm::mat4(1.0f); // identity matrix
			glm::mat4 projection = glm::mat4(1.0f);
			projection = glm::perspective(glm::radians(45.0f), (float)width / (float)heigh, 0.1f, 100.0f);
			if (i == 1)
			{
				view = glm::translate(view, glm::vec3(0.8f, 0.665f, -2.83f));
				view = glm::translate(view, glm::vec3(0.8f, 0.665f, -2.83f));
				view = glm::translate(view, glm::vec3(0.8f, 0.665f, -2.83f));
			}
			else
			{ 
				view = glm::translate(view, glm::vec3(0.0f, 0.0f, transZ));
				view = glm::translate(view, glm::vec3(0.0f, 0.0f, -3.0f));
				view = glm::translate(view, glm::vec3(0.0f, 0.0f, -3.0f));
			}
			



			int viewLoc = glGetUniformLocation(program, "view");
			glUniformMatrix4fv(viewLoc, 1, GL_FALSE, glm::value_ptr(view));

			int projectionLoc = glGetUniformLocation(program, "projection");
			glUniformMatrix4fv(projectionLoc, 1, GL_FALSE, glm::value_ptr(projection));

			model = glm::rotate(model, glm::radians(rotX), glm::vec3(1.0f, 0.0f, 0.0f));
			model = glm::rotate(model, glm::radians(rotY), glm::vec3(.0f, 1.0f, .0f));
			model = glm::rotate(model, glm::radians(rotZ), glm::vec3(.0f, 0.0f, 1.0f));
			int modelLoc = glGetUniformLocation(program, "model");
			glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));

			glBindVertexArray(VAOArray[i]);


			glDrawArrays(GL_TRIANGLES, 0, myMeshes.at(i).Indices.size());
		}

		/* Swap front and back buffers */
		glfwSwapBuffers(window);

		/* Poll for and process events */
		glfwPollEvents();
		glfwSetKeyCallback(window, key_callback);
		glfwSetScrollCallback(window, scroll_callback);
		
	}

What I mean by maximise is: making the GLFW window bigger.

Ok. To double-check your driver install, run this:

and verify that it yields OpenGL support similar to this report:

No, V-Sync is as old as the hills (pre-2000), and your laptop’s embedded Intel GPU (circa 2015) almost certainly supports it. See this Intel article for how to disable V-Sync:

I think you’re confusing this with

I’m not talking about those.

(As a complete aside: the first 3 are newer adaptive display refresh rate techniques, while the latter is HW video encode/decode acceleration. Your old HD 5000 is too old for Adaptive Sync, and as I understand it won’t support G-Sync unless you have an onboard NVIDIA dGPU with a mux so it can drive the video output. In these older generation Intel GPUs, Optimus and G-Sync are mutually exclusive. Adaptive Optimus in newer Intel GPUs alleges to support both, but it has a mux.)

If you’re curious, see these pages which describe V-Sync, G-Sync, FreeSync, and Adaptive Sync w.r.t. monitor tech:

Ok, so this?: (which I don’t see in your code)

Or just stretching the window using the window border, which isn’t maximizing.

And how are you timing your frames?

Also, probably irrelevant to your question, but why are you calling glfwSetKeyCallback() and glfwSetScrollCallback() every frameloop?
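The usual pattern is to register them once during setup, not inside the loop. A minimal sketch (assuming GLFW 3.x and your existing key_callback / scroll_callback functions):

```cpp
// Sketch only: set the input callbacks once, right after creating the window;
// glfwPollEvents() in the loop then dispatches incoming events to them.
GLFWwindow* window = glfwCreateWindow(800, 600, "demo", nullptr, nullptr);
glfwMakeContextCurrent(window);
glfwSetKeyCallback(window, key_callback);
glfwSetScrollCallback(window, scroll_callback);

while (!glfwWindowShouldClose(window))
{
    // ... render + glfwSwapBuffers(window) ...
    glfwPollEvents();
}
```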


What do you mean by timing a frame?
I use these 2 functions to get events from the mouse and keyboard.

You’re collecting an FPS somehow that you’re placing faith in. Where exactly is this coming from?

Also, what values are you getting, before and after?

(FPS = frames per unit time, the reciprocal of time per frame … thus “timing a frame”. BTW, FPS is a poor metric to profile performance with, for so many reasons. I would collect time per frame, in msec. Profile and optimize with that.)
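To make that concrete with the hypothetical numbers from earlier in the thread:

```
7500 FPS  ->  1000 / 7500 ≈ 0.133 ms per frame
7400 FPS  ->  1000 / 7400 ≈ 0.135 ms per frame
```

A “drop” of 100 FPS at that rate is only about 2 microseconds of extra time per frame, which is why ms/frame is the more honest number to profile with.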
