Enforce speeds and focus on 2D! and More

I think many here have confused themselves about what a high- versus low-level API actually is.

When we say high-level 3D API, we usually mean something like the old Direct3D retained mode, Java3D, the Ogre3D “engine” :), RenderMan, or scene graph libraries such as OpenSceneGraph.

Both OpenGL and Direct3D, however, are low-level (very low-level) 3D APIs. Both abstract the graphics hardware functionality, just at different levels of abstraction.

OpenGL is more abstract than Direct3D, but that does not mean you get more control over the functionality, or that either API is more or less capable. Less abstraction usually just means more headaches and extra housekeeping for the API user, as in the case of Direct3D lost devices and other window set-up code.

I agree that OpenGL would benefit a lot if it had support for accelerated 2D functionality, whatever the hardware provides, like Direct2D :)

Unfortunately OpenGL is conceding several features to its competitor, but it’s still winning! It’s the GOD of computer graphics!

Well, it might be experimental and might work extremely well on a few cards, but it won’t run any faster than most calls. Or, put better, it will just be an additional thing that most software devs will overlook (on purpose). OpenGL presumably guarantees a minimum value for GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS to inform software devs; it would be the same thing (or irrelevant) if it were Nvidia and ATI demanding the minimums. Either way it has an impact on software developers, an impact that OpenGL should care about or at least take an interest in.
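(For reference, this is roughly how an application queries that limit at run time; a minimal sketch that assumes a current OpenGL context plus the usual GL and <cstdio> headers:)

// The spec only guarantees a minimum; ask the implementation for the real value.
GLint maxCombinedUnits = 0;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxCombinedUnits);
printf("GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS = %d\n", maxCombinedUnits);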

[QUOTE=mhagain;1237221]You’ve got it completely backwards. Hardware doesn’t know or care whether it’s OpenGL, D3D, or something else entirely. Your OpenGL driver converts calls into some vendor-specific format that the hardware understands, then sends them on to the hardware for actually drawing stuff. At the most basic level hardware is just a bunch of logic gates and other gubbins; there’s nothing in hardware that knows or understands a glBlendFunc call, for example. There is a hardware blending unit, and your glBlendFunc call gets translated by the driver into something that sets parameters for the blending unit. What that “something” is is none of your business; it’s entirely hardware-dependent, is allowed to vary across different hardware, and is no different for any API. The blending unit itself doesn’t care about OpenGL, much the same way that your CPU doesn’t care about the OS it’s running.

OpenGL is implemented by the driver, which is software, not hardware, and that’s where it starts and ends. Beyond that point hardware takes over and OpenGL is completely irrelevant to any further discussion of what does or doesn’t happen.[/QUOTE]
Actually I’ve got it right; look up the word “emulated”. Hardware is an approximation of OpenGL (it runs to the OpenGL spec and may add its own extensions beyond that). OpenGL to low-level software to hardware can be thought of like C to ASM to binary. The point here is that hardware vendors conform to the OpenGL spec. I’m pretty sure OpenGL doesn’t convert the code to low-level; that’s the job of hardware vendors, to my knowledge. (If you’re referring to the chicken and egg then it’s irrelevant to how OpenGL came about.) OpenGL is all about conformance (being cross-platform), and that’s why it’s very bad for OpenGL to be hardware-specific, or for software developers to use hardware-specific code (excluding consoles). And for the above reasons, this is why I call OpenGL high-level.

^ Actually I’m a bit wrong there, because programs get compiled into binary and, to my knowledge, all executables work on the same OS across all sorts of computers. But it’s still the same idea anyway.

You’re saying newer hardware can’t handle fast 2D? Beyond that, what you’ve just stated is that Xlib is a better candidate for 2D than OpenGL, which, given all its gibberish, is both true and untrue.

Blitting, or simply copying image data from A to B, is one of the most basic operations possible, and you’re saying modern-day graphics cards can’t handle this? On top of that, simple stuff like having a 2D rendering process, or a 2D drawing function that goes through certain states (a bitmask for whether to render or not, or even just through the shader) and then just gets mapped to 0.0 to 1.0 in both x and y on the screen: why would this be impossible for hardware vendors to provide?
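(For what it’s worth, modern OpenGL does expose exactly that kind of copy; a rough sketch using glBlitFramebuffer from GL 3.0, where srcFbo and dstFbo stand for framebuffer objects you have already created and attached:)

// Copy a width x height region from one framebuffer to another on the GPU.
glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, dstFbo);
glBlitFramebuffer(0, 0, width, height,   // source rectangle
                  0, 0, width, height,   // destination rectangle
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);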

The main point is: if OpenGL turned around tomorrow and stated that the next version is going to add speedy, clear and concise 2D (yada yada) to the core, perhaps existing OpenGL drivers might not be able to follow suit and won’t be able to support future versions (which is very unlikely). However, when that happens, all the future graphics card vendors will say “OK, OpenGL demands this, so we’re going to have to come up with the hardware to support it” (presuming it’s reasonable enough) in future cards. Plus, by not letting hardware vendors get away with cheap imitations of hardware features backed by very slow software, they could incorporate the performance stamps, and there’s probably something like that already there anyway.

For image loading, allowing the hardware vendors to take over with simple calls (that don’t take away any functionality) would enable the hardware code to be smarter about how that data gets from the hard drive to video memory. (Most of it would still conform, for the most part, to the CPU (software), as in OpenGL or in GLU, not available for software devs to call.) But if these were simple high-level functions, then bypassing RAM would be possible, which would make loading textures/pixel data into video memory far faster. (So: simple calls in the OpenGL spec and advanced workings in the hardware; it would be up to the hardware how to retrieve img.tga from the hard drive.) And think about built-in graphics cards and CrossFire. Of course, a hardware vendor could just do this on its own, but then its workings wouldn’t work with already-released software.

I might not understand much about OpenGL, but I do understand simple logistics.

[QUOTE=fobbix;1237233]Actually I’ve got it right; look up the word “emulated”. Hardware is an approximation of OpenGL (it runs to the OpenGL spec and may add its own extensions beyond that). OpenGL to low-level software to hardware can be thought of like C to ASM to binary. The point here is that hardware vendors conform to the OpenGL spec. I’m pretty sure OpenGL doesn’t convert the code to low-level; that’s the job of hardware vendors, to my knowledge. (If you’re referring to the chicken and egg then it’s irrelevant to how OpenGL came about.) OpenGL is all about conformance (being cross-platform), and that’s why it’s very bad for OpenGL to be hardware-specific, or for software developers to use hardware-specific code (excluding consoles). And for the above reasons, this is why I call OpenGL high-level.

^ Actually I’m a bit wrong there, because programs get compiled into binary and, to my knowledge, all executables work on the same OS across all sorts of computers. But it’s still the same idea anyway.[/QUOTE]

While it’s true that hardware is designed with OpenGL and D3D in mind, the hardware still has its own command set. OpenGL or D3D API calls are simply mapped to the corresponding hardware functionality, if it exists, or emulated using the supported hardware features, as in the case of most compatibility profile features.

From what you say, it seems that you lack a basic understanding of how GPUs work. Do some research first, then come back.

[QUOTE=fobbix;1237233]You’re saying newer hardware can’t handle fast 2D? Beyond that, what you’ve just stated is that Xlib is a better candidate for 2D than OpenGL, which, given all its gibberish, is both true and untrue.

Blitting, or simply copying image data from A to B, is one of the most basic operations possible, and you’re saying modern-day graphics cards can’t handle this? On top of that, simple stuff like having a 2D rendering process, or a 2D drawing function that goes through certain states (a bitmask for whether to render or not, or even just through the shader) and then just gets mapped to 0.0 to 1.0 in both x and y on the screen: why would this be impossible for hardware vendors to provide?[/QUOTE]

Current hardware can handle 2D ultra fast, as fast as 3D (considering 2D is just a special case of 3D, as was mentioned before). Just because you can’t write an application that renders 2D elements fast doesn’t mean it’s not possible. Even for very complex 3D scenes, current GPUs can easily render at hundreds of FPS. With a properly written 2D renderer you should get at least the same with any GPU on the market. If you failed to do so, blame yourself, not the OpenGL spec.

OpenGL is for accessing 3D graphics hardware, not for accessing the hard drive or for reading your favourite image format. There are tons of libraries out there to accomplish that. OpenGL is not a 3D rendering engine; it is an API for accessing graphics hardware. Just accept this and move on.
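(For example, a sketch only, assuming the single-header stb_image library and a current GL context, with "sprite.png" standing in for whatever file you actually load: decoding an image with an external library and handing the pixels to OpenGL takes just a few lines.)

#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

// Decode the file on the CPU with a third-party library...
int w, h, channels;
unsigned char *pixels = stbi_load("sprite.png", &w, &h, &channels, 4);  // force RGBA

// ...and let OpenGL worry only about getting it into video memory.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

stbi_image_free(pixels);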

By the way, the FBO that you’ve mentioned a couple of times in your posts is simply an API concept, not a hardware feature. The hardware is able to write the results of rendering to arbitrary video memory locations, which can be your texture, your renderbuffer, or your default framebuffer. The rest is just an abstraction that the spec, and thus the driver, provides so that you don’t have to deal with the intricate details of how to program the hardware in order to, e.g., render to a texture or to multiple textures.
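(To illustrate the abstraction, this is roughly what “render to texture” looks like through the FBO API; a sketch assuming a GL 3.0+ context and an already-created texture named colorTex:)

// Create an FBO and attach an existing texture as its colour target.
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // the attachment combination is not supported; handle the error
}

// Everything drawn now lands in colorTex instead of the window;
// glBindFramebuffer(GL_FRAMEBUFFER, 0) switches back to the default framebuffer.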

What if I wanted tens of thousands of units (with 2D images) while having a 3D map? It’s true that they’d probably have to be billboards to have the correct depth scaling applied (even though you could get around that; rendering 2D with 3D might have some side effects, though). Still, it’s very annoying for HUDs and GUIs, which are 99% 2D, and from a latest-generation FPS perspective the HUD should carry the smallest performance penalty possible. And then there’s the fact that you might want to modify textures in real time.
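(Tens of thousands of 2D units are exactly the kind of workload current hardware handles easily if you batch or instance them. A very rough sketch with GL 3.3 instancing; quadVao, instanceVbo and spriteCount are placeholders for state you would set up yourself:)

// One unit quad is reused for every sprite; per-sprite positions live in instanceVbo,
// bound to attribute 1 and advanced once per instance rather than once per vertex:
//     glVertexAttribDivisor(1, 1);
glBindVertexArray(quadVao);
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, spriteCount);  // one call draws every sprite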

My biggest complaint about OpenGL is that it doesn’t work the way it’s supposed to on my computer. I think the biggest problem I’ve had is with PBOs: I loaded some in, rendered them, and then tried loading some more in, which seems a reasonable request, but OpenGL doesn’t like it; the frame rate just starts collapsing the more I load in. I hate it. It’s not my fault, but I’ve had to change to FBOs, and then textures either won’t render to or from them, so now I’ve changed to rectangle textures and it’s working fine. But I know it’s not direct, I know it’s not as fast as it should be, and I know I’ve wasted so much time on the other approaches because they were presented to me (by books and the OpenGL spec) as being great for 2D.
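(For comparison, this is the usual shape of a streaming PBO texture upload; a sketch rather than a fix for your specific case, where pbo is a buffer created earlier with glGenBuffers, tex an existing GL_TEXTURE_2D, and pixels/size/width/height your own data:)

// Orphan the buffer so the driver need not stall on last frame's transfer,
// then fill it and let glTexSubImage2D read from the PBO asynchronously.
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, size, NULL, GL_STREAM_DRAW);  // orphan old storage
void *ptr = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
memcpy(ptr, pixels, size);                                          // CPU-side copy only
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, (void *)0);              // 0 = offset into the PBO
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);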

I’ll be honest: at first I thought it was me. I’ve messed around a lot with the software image loader I created (breaking and fixing the confusing-looking code), and I’ve written an animation loader (which is quite hard), but I feel OpenGL should already have these functions built in. At the moment I can only use TGAs, which is not exactly a problem and perhaps a slight blessing. I find there are too many variables for OpenGL to go wrong, especially with poor book examples and not enough show-how (something even as simple as forgetting to enable 2D textures at startup), and when you do come across a problem it’s very hard to track down. Also, with my current graphics card I have to call glFinish() every frame to stop a high-pitched noise. I’ll be honest, I first heard this with DirectX (interesting how I’ve come over to OpenGL); I couldn’t solve it, and every time I went into fullscreen it was there, emitting from my computer, while I looked for errors in my code thinking it was something I’d done. But, well, it requires that I put that function in.

So OpenGL doesn’t communicate with the CPU? And if there are tons of libraries out there, then why doesn’t OpenGL call out to them and get them incorporated into the spec, or the GLU spec, anyway?

I’ve made my points on this forum and I’ve spent a lot of free time here. It’s up to OpenGL whether they take an interest or not, but I really want those 2D capabilities and I believe I’ve made some very good suggestions, even if some of them are not obviously relevant to the spec itself, or not immediately anyway (like OpenGL having its own operating system, so it can give games and high-end 3D suites 100% of the resources, stamp out the cross-platform mishaps, and give game/software developers a very easy time with most of the specs combined and presented in a professional, complete manner). More so, it wouldn’t matter which OS you chose to launch the OpenGL OS from, which for me is a very big thing. (I don’t like Windows 7 and the pricing is extortionate; it should be £10 per year indefinitely, or until they discontinue the OS, and the funny thing is that would cost people more and MS would make more. And they wouldn’t have to try to sell crappy new nanny-like OSes.)

Tech, for me, has moved on; I want to see some very big and positive changes so I can create some very good games as a designer. !!! Before anybody mentions UDK or Unity !!! I’m trying to create a great RTS game (2.5D: 2D units and a 3D map with an entirely 2D GUI/HUD, grrr), but I don’t have the manpower, and the mouse-picking problem has stopped me dead, so I’m continuing on with my gaming zone until I hear some news from multiple places (relating to multiple subjects).

I’ve also been going to the gym quite a bit, and as you’ve probably guessed, no, I’m not employed, lol. I should be, but I have bigger ambitions and don’t want to be limited. Hopefully I’ll hear some good news soon.

It’s been nice-ish talking to you all. Thank you for your views.

I guess this is the actual problem you have with OpenGL.
That is the responsibility of the implementation (i.e. Nvidia or AMD/ATI or Intel etc., not the Khronos Group).

Can you be specific about your current hardware/driver/OS?

Also, with my current graphics card I have to call glFinish() every frame to stop a high-pitched noise. I’ll be honest, I first heard this with DirectX (interesting how I’ve come over to OpenGL); I couldn’t solve it, and every time I went into fullscreen it was there, emitting from my computer, while I looked for errors in my code thinking it was something I’d done. But, well, it requires that I put that function in.

glFinish() will totally kill your performance, especially with async texture data transfer.
The noise can come from the card getting hot and spinning up the fan to avoid melting. A starting point for avoiding unnecessary work on the card is to enable vsync by setting the swap interval to 1 instead of 0 (wglSwapIntervalEXT / glXSwapIntervalSGI).
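(A sketch of what that looks like on Windows; wglSwapIntervalEXT comes from the WGL_EXT_swap_control extension, on Linux glXSwapIntervalSGI or glXSwapIntervalEXT plays the same role, and hdc stands for your window’s device context:)

// Fetch the extension entry point once after context creation, then enable vsync.
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);
PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
    (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
if (wglSwapIntervalEXT)
    wglSwapIntervalEXT(1);   // 1 = wait for vblank, 0 = swap as fast as possible

// ... render the frame ...
SwapBuffers(hdc);            // now throttled to the monitor refresh rate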

No, really, his problem with OpenGL is that he wants to “blit” sprites rather than render them the correct way: with textured quads, the way every other application that uses OpenGL (or D3D) for 2D sprite drawing does it.
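(In old-style immediate-mode terms, just to keep the sketch short, since a real renderer would batch this into a VBO: drawing a sprite at pixel position (x, y) is nothing more than the following, assuming a window-sized orthographic projection set up with glOrtho(0, winW, winH, 0, -1, 1) and a loaded texture spriteTex:)

// Draw one sprite as a textured quad; w and h are the sprite size in pixels.
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, spriteTex);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
glEnd();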

[QUOTE=ZbuffeR;1237267]I guess this is the actual problem you have with OpenGL.
That is the responsibility of the implementation (i.e. Nvidia or AMD/ATI or Intel etc., not the Khronos Group).

Can you be specific about your current hardware/driver/OS?[/QUOTE]
ATI Radeon HD 5700 Series/Windows 7 Home X86 and Linux - Ubuntu X86

Driver Packaging Version 8.85-110419a-118908C-ATI
Catalyst™ Version 11.5
Provider ATI Technologies Inc.
2D Driver Version 8.01.01.1152
2D Driver File Path /REGISTRY/MACHINE/SYSTEM/ControlSet001/Control/Class/{4D36E968-E325-11CE-BFC1-08002BE10318}/0001
Direct3D Version 7.14.10.0833
OpenGL Version 6.14.10.10750
Catalyst™ Control Center Version 2010.0825.2146.37182

Primary Adapter
Graphics Card Manufacturer Powered by ATI
Graphics Chipset ATI Radeon HD 5700 Series
Device ID 68B8
Vendor 1002

Subsystem ID 2991
Subsystem Vendor ID 1682

Graphics Bus Capability PCI Express 2.0
Maximum Bus Setting PCI Express 2.0 x16

BIOS Version 012.019.000.013
BIOS Part Number 113-HD577XZNFB4-113-C01201-021
BIOS Date 2010/04/14

Memory Size 1024 MB
Memory Type GDDR5

Core Clock in MHz 850 MHz
Memory Clock in MHz 1200 MHz
Total Memory Bandwidth in GByte/s 76.8 GByte/s

Motherboard - EP43T-USB3 (Gigabyte)
CPU - Intel Core 2 Quad - Q9550

I don’t rule out that it was probably something I was doing wrong myself, but the possibilities are endless; I can’t be certain about these things. Do you have some test code, or perhaps something I can compile in C++ with VS 2010 or Code::Blocks?

[QUOTE=ZbuffeR;1237267]glFinish() will totally kill your performance, especially with async texture data transfer.
The noise can come from the card getting hot and spinning up the fan to avoid melting. A starting point for avoiding unnecessary work on the card is to enable vsync by setting the swap interval to 1 instead of 0 (wglSwapIntervalEXT / glXSwapIntervalSGI).[/QUOTE]
The noise is a high-pitched frequency that starts immediately when certain programs are run (when a frame is issued before the previous frame has finished rendering; it’s definitely to do with graphics rendering). It’s not from the fan, and I have had similar problems with other graphics cards; I’m pretty sure even my mum’s computer does it as well. Anyway, thanks for the info, it could help a lot.

Anyway, it’s quite late and I have to get up early, so see ya.

A high-pitched noise happens when there is a coil that is vibrating. It does happen, ZbuffeR. I have a PC that emits a high-pitched noise when I use Linux (Ubuntu), but under Win XP, for some reason, it doesn’t do it. The noise was coming from a coil near my CPU. Also, when the CPU did some work, the noise would cease. Normally, the manufacturer should design the coil properly or put some silicone or hot-melt glue over it to dampen the noise.
I have no idea why it was fine under Win XP.

The coil near the CPU is part of the CPU’s power supply system. Semiconductor switches pump current into the coil at a frequency which depends on the load. When the CPU is not idle it consumes more power and the coil works at a higher frequency that goes beyond the human hearing limit (ultrasound). That’s why the coil becomes silent when your CPU has work to do. As for XP, it probably always keeps the CPU busy enough to consume enough power to keep the coil silent.
The coil produces sound because it is not glued together rigidly enough to withstand the magnetic forces that try to bend its parts, so they vibrate a bit.

Sorry for the off-topic, but as other people had already talked about it and it is one of my other interests, I decided to join in :)