How do I turn a float position coordinate array into a 'pixel' position coordinate array?

So, I wanted to create a rectangular cloth that is drawn as a lot of triangles using the triangle-strip method. This is how I have been creating it:

    Rectangle(0.8f, 0.4f, 0.1f);

The rectangle constructor looks like this:

    Rectangle::Rectangle(float width, float height, float particleSepDistance)
    {
        noOfParticlesWidth  = round(width  / particleSepDistance) + 1;
        noOfParticlesHeight = round(height / particleSepDistance) + 1;

        for (int y = noOfParticlesHeight - 1; y >= 0; y--)
        {
            for (int x = 0; x < noOfParticlesWidth; x++)
            {
                Particle p(glm::vec3(x * particleSepDistance, y * particleSepDistance, 0.0f),
                           glm::vec2((float)x / (noOfParticlesWidth  - 1),
                                     (float)y / (noOfParticlesHeight - 1)));

                particlesVBO.push_back(p);
            }
        }
    }

So when I output what is inside my particlesVBO, just the positions (Particle holds a glm::vec3 position and a glm::vec2 TexCoord), I get:

    {0, 0.4, 0}, {0.1, 0.4, 0}, {0.2, 0.4, 0}, {0.3, 0.4, 0}, {0.4, 0.4, 0}, {0.5, 0.4, 0}, {0.6, 0.4, 0}, {0.7, 0.4, 0}, {0.8, 0.4, 0},
    {0, 0.3, 0}, {0.1, 0.3, 0}, {0.2, 0.3, 0}, {0.3, 0.3, 0}, {0.4, 0.3, 0}, {0.5, 0.3, 0}, {0.6, 0.3, 0}, {0.7, 0.3, 0}, {0.8, 0.3, 0},
    {0, 0.2, 0}, {0.1, 0.2, 0}, {0.2, 0.2, 0}, {0.3, 0.2, 0}, {0.4, 0.2, 0}, {0.5, 0.2, 0}, {0.6, 0.2, 0}, {0.7, 0.2, 0}, {0.8, 0.2, 0},
    {0, 0.1, 0}, {0.1, 0.1, 0}, {0.2, 0.1, 0}, {0.3, 0.1, 0}, {0.4, 0.1, 0}, {0.5, 0.1, 0}, {0.6, 0.1, 0}, {0.7, 0.1, 0}, {0.8, 0.1, 0},
    {0, 0, 0}, {0.1, 0, 0}, {0.2, 0, 0}, {0.3, 0, 0}, {0.4, 0, 0}, {0.5, 0, 0}, {0.6, 0, 0}, {0.7, 0, 0}, {0.8, 0, 0}

Now, it is very clear to me what these coordinates mean, and I can easily create the EBO and then draw the rectangle as a triangle strip.
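For context, my index generation looks roughly along these lines (a simplified sketch, with repeated indices forming degenerate triangles so that all the rows fit in one strip):

    // Rough sketch of my EBO index generation for GL_TRIANGLE_STRIP.
    // 'row' indexes rows in the order they are laid out in particlesVBO.
    std::vector<unsigned int> indices;
    for (int row = 0; row < noOfParticlesHeight - 1; row++)
    {
        for (int x = 0; x < noOfParticlesWidth; x++)
        {
            indices.push_back( row      * noOfParticlesWidth + x); // current row
            indices.push_back((row + 1) * noOfParticlesWidth + x); // row below
        }
        if (row < noOfParticlesHeight - 2) // degenerate jump to the next row pair
        {
            indices.push_back((row + 1) * noOfParticlesWidth + noOfParticlesWidth - 1);
            indices.push_back((row + 1) * noOfParticlesWidth);
        }
    }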

What I don't understand is how you can create something WITHOUT using floats like these and still know how large it is or where it is. For example, this person creates a cloth in the following way:

	Cloth(float width, float height, int num_particles_width, int num_particles_height) : num_particles_width(num_particles_width), num_particles_height(num_particles_height)
	{
		particles.resize(num_particles_width*num_particles_height); 

		for(int x=0; x<num_particles_width; x++)
		{
			for(int y=0; y<num_particles_height; y++)
			{
				Vec3 pos = Vec3(width * (x/(float)num_particles_width),
								-height * (y/(float)num_particles_height),
								0);
				particles[y*num_particles_width+x]= Particle(pos);
			}
		}
	}

In the main, they create the cloth like this:

    Cloth cloth1(14, 10, 55, 45);
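If I plug those numbers into their loop, particle (x, y) gets position (14 * x/55, -10 * y/45, 0), so the positions run from (0, 0, 0) in one corner to roughly (13.75, -9.78, 0) in the other, which is nowhere near the [-1, 1] range I have been working in.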

Their full code is available if you search for Mosegaard's Cloth Simulation Coding Tutorial.

I know that this must be a really dumb question but, how do I change my code to replicate what they have done? I look at 14 and 10 and I can only assume they mean pixels? But what would the actual dimensions of the cloth be in terms of the floats on the screen that they would be getting? I think what I am really asking about is normalized coordinates, but I am so new to this that I can't work out what actual float size on the screen they are creating here. I want my final on-screen float values to be like the ones I've given above, but I would like particlesVBO to be created from the 'pixel' (?) values, so that I can work with the pixel, non-normalized values, and then normalize only once, after all other calculations on the positions are done.

They initialize their window size like this:

    glutInitWindowSize(1280, 720);

I actually use GLFW, so I create my window like this:

    glfwCreateWindow(800, 600, "name", NULL, NULL);
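To make it concrete, this is the kind of "normalize once at the end" step I imagine, mapping my 800 x 600 pixel coordinates into OpenGL's [-1, 1] range (pixelToNDC, windowWidth and windowHeight are just placeholder names I made up):

    // My guess at the final conversion: a position in window pixels,
    // origin at the bottom-left, mapped into normalized device coordinates.
    glm::vec3 pixelToNDC(const glm::vec3& p, float windowWidth, float windowHeight)
    {
        return glm::vec3((p.x / windowWidth)  * 2.0f - 1.0f, // [0, 800] -> [-1, 1]
                         (p.y / windowHeight) * 2.0f - 1.0f, // [0, 600] -> [-1, 1]
                         p.z);
    }

Or would it be better to leave the positions in pixels and let an orthographic projection such as glm::ortho(0.0f, 800.0f, 0.0f, 600.0f) do this in the vertex shader instead?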

Could someone please give me some insight, a tutorial, a link to something helpful, anything at all? I really don't know what to do about this, and I am struggling and stressed out to the point of having chest pain and sweating every day, not being able to understand something so simple…