Unresolved external symbol for every GLEW function I use

Hey guys, so I wanted to create a new OpenGL Visual Studio project in 64-bit, so I downloaded the GLEW and GLFW binaries. My project settings are as follows:
For Additional Library Directories:

  • glew-2.1.0\lib\Release\x64
  • glfw-3.3.bin.WIN64\lib-vc2019

For Additional Dependencies:

  • glew32s.lib (I don't get why it's named glew32 when it's sitting in the x64 folder)
  • opengl32.lib
  • glfw3.lib
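
For reference, I believe the same Additional Dependencies can also be declared directly in code with MSVC's #pragma comment (the library search directories still come from the project settings); just a sketch:

    // MSVC-specific sketch: equivalent of the Additional Dependencies list above.
    // Additional Library Directories still have to be set in the project.
    #pragma comment(lib, "glew32s.lib")  // static GLEW (needs GLEW_STATIC; see follow-up below)
    #pragma comment(lib, "opengl32.lib") // Windows OpenGL import library
    #pragma comment(lib, "glfw3.lib")    // static GLFW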

With these settings, every GLEW function I use, like glGenBuffers or glUseProgram, gives an LNK2001 unresolved external symbol error.
Any clue where I made a mistake?

(And since I'm new to all of this, can someone explain why I have to link opengl32.lib in a 64-bit app, why there is no opengl64.lib, and so on?)

Well, it seems to work when I link it dynamically (glew32.lib instead of glew32s.lib). Still no clue why static linking didn't work out.
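
For anyone hitting the same wall: from what I've read, glew32s.lib only links if GLEW_STATIC is defined before including the header (in code or in the project's preprocessor definitions); otherwise the header declares everything as dllimport and the linker looks for DLL symbols the static library doesn't provide, which matches the LNK2001 errors. A minimal sketch of what I believe the setup should look like (I haven't gone back to confirm this was my exact problem):

    // Sketch: static GLEW setup. GLEW_STATIC must be defined before the include.
    #define GLEW_STATIC
    #include <GL/glew.h>     // must come before any other OpenGL/GLFW headers
    #include <GLFW/glfw3.h>

    int main() {
        if (!glfwInit()) return -1;
        GLFWwindow* window = glfwCreateWindow(640, 480, "Test", nullptr, nullptr);
        if (!window) { glfwTerminate(); return -1; }
        glfwMakeContextCurrent(window);       // GLEW needs a current GL context
        if (glewInit() != GLEW_OK) return -1; // loads the function pointers
        GLuint vbo;
        glGenBuffers(1, &vbo);                // only usable after glewInit() succeeds
        glfwDestroyWindow(window);
        glfwTerminate();
        return 0;
    }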

I am working on the same thing.
I also want to know why there is no opengl64.lib.
There is really only one place where I found some decent information, but it still isn't enough:
community.khronos.org/t/64-bit-opengl-for-64-bit-windows-xp-amd/51198

I want to make sure I'm using OpenGL as fast as possible. If my application feeds the graphics card double-precision floats, will it have to truncate them to single precision? To optimize, should I always send single-precision floats?

That would be fine for regular graphics rendering, which is what I will mostly be using it for, but I was also planning to try some 64-bit scientific computing on the GPU. So I got a Titan Z, because it's really one of the only graphics cards with a high double-to-single performance ratio. That is, most graphics cards are designed for single precision and can do doubles, but only at around one tenth of the FLOPS or worse; they just don't usually have hardware optimized for doubles. A 1:2 ratio is the ideal.

For example, see en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units. It takes a while to look through this massive list; if you save that page as a PDF at 25 percent scale, it won't cut any of it off. Page 11 has the best GeForce cards (1080 Ti etc.) and page 21 the best Quadro cards (Tesla V100 etc.).

the 1080 Ti has 10800 GFLOPS for singles and 300 for doubles
the Titan V has 12300 GFLOPS for singles and 6150 for doubles
the Tesla V100 has 14000 GFLOPS for singles and 7000 for doubles
the only relatively inexpensive ones with a high double-to-single ratio are:
the Titan: 4500 single, 1500 double
the Titan Black: 5100 single, 1700 double
the Titan Z (two Titan Blacks put together): 8100 single, 2700 double

Can anyone help us understand why there is no opengl64.lib?
What happens when you send double-precision numbers to the GPU using OpenGL functions linked through opengl32.lib? Does it send them to the double-precision hardware on the GPU, or does it do some odd conversion, as if OpenGL were a 32-bit bottleneck?

I am also trying out the static linking. I haven't really used any functions yet; I will let you know if it works for me.

I’m pretty sure there are a 64-bit version and a 32-bit version of opengl32, because this page shows both:
dlldownloader.com/opengl32-dll/
and they’re different sizes.

Legacy Microsoft Windows reasons. It’s named opengl32.dll for both the 32-bit and 64-bit versions. … on Windows.

Single-precision will be faster (and more compatible; double-precision attributes were added in 4.1). Note that the CPU-side representation and GPU-side representation aren’t necessarily the same thing. E.g. if you call glVertexAttribPointer with a type of GL_DOUBLE, the values will be converted to single-precision by the GPU (the corresponding GLSL variable needs to have a type of float or vec*). You need to use glVertexAttribLPointer for attributes which are double or dvec* on the GPU.
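
To make that concrete, here is a minimal sketch of the two calls side by side (assuming a bound VAO and a VBO of tightly packed GLdouble triples; the attribute indices are arbitrary):

    // Attribute 0: GL_DOUBLE via glVertexAttribPointer. The data is converted
    // to single precision; the GLSL variable must be declared as vec3.
    glVertexAttribPointer(0, 3, GL_DOUBLE, GL_FALSE, 3 * sizeof(GLdouble), nullptr);
    glEnableVertexAttribArray(0);

    // Attribute 1: GL_DOUBLE via glVertexAttribLPointer. The data stays double
    // precision; the GLSL variable must be declared as dvec3 (OpenGL 4.1+).
    glVertexAttribLPointer(1, 3, GL_DOUBLE, 3 * sizeof(GLdouble), nullptr);
    glEnableVertexAttribArray(1);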

Historical reasons. The OpenGL library has always been called opengl32.lib since before 64-bit versions of Windows were available.

There’s no difference in behaviour between 32-bit and 64-bit applications. With modern OpenGL, the CPU doesn’t have much involvement beyond sending commands to the GPU and arbitrating memory access. And a 32-bit architecture can handle 64-bit data just fine (the x87 FPU has had 80-bit registers ever since the original 8087, which was designed for use with a 16-bit CPU).
