New GL spec means: new begging.

And, I can’t say this enough, use the correct PREFIX with the correct numbers.

Any such extension really has no right to be returning any units. It should be returning integers of a known unit, not strings with units attached.
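To put it concretely, something like the sketch below is all that's being asked for: one integer, one documented unit. The token name is made up purely for illustration; no such enum exists in any spec.

```c
/* Sketch only: a query returning a plain integer in one documented unit
 * (bytes), instead of a string with a prefix baked in.
 * GL_AVAILABLE_MEMORY_HYPOTHETICAL is a made-up placeholder token. */
#include <GL/glew.h>

#define GL_AVAILABLE_MEMORY_HYPOTHETICAL 0x0000  /* placeholder, not a real enum */

static GLint64 available_memory_bytes(void)
{
    GLint64 bytes = 0;
    glGetInteger64v(GL_AVAILABLE_MEMORY_HYPOTHETICAL, &bytes);
    return bytes;  /* the caller decides how to format this for display */
}
```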

Furthermore, it’s a suffix, not a prefix. Units go at the end, not the beginning.

This is crap, propagated by hard drive and other digital storage manufacturers.

No, it is “crap” propagated by people who know what the metric system is.

Long before even the difference engine existed, there was the metric system. It defined what the prefixes “kilo”, “mega”, “giga”, etc meant.

Programmers do not get to redefine the metric system just because it is convenient for them. Meters, liters, grams, moles, light-years, etc, all conform to the metric system: when you put “kilo” in front of them, it means 1000 of them. Bytes don’t get an exemption just because it is convenient. Computer “kilobytes” are not 1000 bytes, therefore they are not kilobytes, and no amount of programmer inertia, inflexibility and whining will change this fact.

Personally, I don’t like KiB, MiB, and such; the names seem silly and difficult to pronounce. But I do know and understand why they exist. And my personal discomfort with new terminology does not in any way weaken the argument for their existence.

[quote="Alfonse Reinheart
"]
Programmers do not get to redefine the metric system just because it is convenient for them.
[/quote]
Bits & Bytes were never part of the metric system.
IBM and others were using k=1024 when talking about Bytes of core memory in the early 60s, then the memory chip manufacturers standardised on this usage, making it accepted practice for DECADES.
KiB & MiB were not even proposed until 1995, and were not adopted by the IEC and NIST until 1999.
Some standards authorities like JEDEC still don’t accept the change, and I have never seen a magazine article, reference manual or advertisement that uses KiB/MiB for computer memory.
Even the hard disk manufacturers still use M=2^20 when referring to their cache memory.

That’s simply not true.
Capital B has always meant Byte, lowercase b has always meant bit, and lowercase m has always meant 1/1000th.
Using mb for MegaByte started when magazine journalists got new word processors with built-in spell checkers that helpfully detected unusual capitalisation and automatically changed it to lower case when they hit the space bar.
Some of them didn’t notice that whenever they typed MB it magically changed to mb, and once a few magazines had published this same mistake people started to assume it must be correct.

That’s all nice and stuff, but whenever a piece of code tells you a size of memory, that simply has to be done in Bytes. Period.

Everything else is for the convenience of the person sitting at the PC.

Jan.

[QUOTE]That’s simply not true.
Capital B has always meant Byte, lowercase b has always meant bit[/QUOTE]
Yeah, I thought about that after the fact. True, in some contexts b and B are distinguished as bit and byte (particularly memory sticks/chips), though that’s not universal, as you point out, so you have to use context to determine what ‘b’ means. While I personally always use a capital B for byte in written text, I have seen others (not using desktop publishing software but a raw text editor) use mb for MB.

I can’t believe this thread has degenerated, so I will add to it:

http://xkcd.com/394/

enjoy.

Ahhh - So those RAM chips with an extra parity bit per byte should be in KBa :smiley:
Then there are all those old computers with 5, 6, 7, 8, 9, or 16 bit bytes, so shouldn’t we be using KiO (Kibi-Octet) to avoid all confusion :sorrow:
If the IEC really want people to change to a less confusing notation then they really need to come up with something that’s easier to say and doesn’t sound so harsh.

Shall I make a separate thread for this junk?

Totally agree that it should only return whole numbers without prefixes.

And Alfonse Reinheart, I said prefix because it’s in front of the unit.

I’m NOT speaking about UNITS at all!

It is about MISUSE of K, M, G and so on.

For the people who don’t know yet:
Computers being binary doesn’t mean you can’t have 10 bits or 100 bytes. It is not more complicated for the logical gates of the computer to divide by 1000 than it is to divide by 1024.

Just because Windows reports it this way doesn’t mean you have to follow suit. Windows != computer.

Stop mixing up the number of combinations you can represent with a given number of bits and the number of bits themselves. It’s really dumb and annoying.

Most people read it as 1000^x anyway, and so end up with the wrong number.

It’s the programmers that are misusing these things, and it has to stop. Nobody likes this “K = 1000 almost everywhere, but with bytes or bits we want our special circle meaning to be special” attitude. I hate this %!#! crap!!

The article is:
http://www.geeks3d.com/20100531/programm…sage-in-opengl/

Then why didn’t you come up with something better/different?

Does anybody want to give a reasonable example?

Oh wait, the k=1024 people are not able to come up with a better alternative. Or are they? But then why didn’t you say so back when it mattered?

You didn’t care when that organization was looking for good names. Now that they have made a new system, you’re all bashing it. I find them rather strange to pronounce, but that won’t stop me from using them.

@Dark Photon

Please learn some more about computers before blabbering about such utter nonsense.

Actually hard drive makers put on the box that they mean G = 1000^3.

If you compare the raw full numbers (not the abbreviated ones) in Windows, then you’ll see they match up.

I see that you don’t actually understand that although computers work in binary, the number of bytes does NOT HAVE to be a power of two. Those conventions started after sizes grew beyond the thousands, long after computers were invented. The first computers just showed numbers without prefixes. Programmers were a bit lazy and also had to do memory mappings from RAM to HDD. They used 1024 to be able to calculate with whole numbers, which was easier at the time, and the difference wasn’t significant. Now with TB the difference is getting really big in comparison with the amount (around 10 percent).
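For what it’s worth, the drift is simple arithmetic (nothing vendor-specific): 1024^n runs away from 1000^n by about 2.4%, 4.9%, 7.4% and 10% at the kilo, mega, giga and tera steps. A trivial check:

```c
/* Quick check of how far the binary and decimal prefixes drift apart.
 * Prints roughly 2.4%, 4.9%, 7.4%, 10.0%. */
#include <stdio.h>

int main(void)
{
    const char *steps[] = { "kilo/kibi", "mega/mebi", "giga/gibi", "tera/tebi" };
    double decimal = 1.0, binary = 1.0;

    for (int i = 0; i < 4; ++i) {
        decimal *= 1000.0;
        binary  *= 1024.0;
        printf("%s: %.1f%% difference\n", steps[i],
               (binary - decimal) / decimal * 100.0);
    }
    return 0;
}
```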

@Everybody:
Mac is using them in 10.6 for hard drive space, the Linux kernel has been using them since 2001, and now distributions and userspace programs are adopting this system. https://wiki.ubuntu.com/UnitsPolicy

JEDEC is, in its newest revisions, adding notes recommending the new system.

TLDR, but oh boy…

"It is not more complicated for the logical gates of the computer to divide by 1000 than it is to divide by 1024. "

Yeah, well, compare the cycles for a DIV and a SHIFT, you’ll see.
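For reference, here is the comparison in question, as a sketch assuming an unsigned byte count: the division by 1024 compiles to a single shift, while the division by the constant 1000 compiles to a multiply-by-reciprocal plus shift sequence rather than a hardware DIV.

```c
#include <stdint.h>

/* bytes / 1024: with an unsigned operand this is exactly bytes >> 10. */
static uint64_t to_kibibytes(uint64_t bytes) { return bytes / 1024; }

/* bytes / 1000: constant divisor, so optimizing compilers emit a
 * multiply-and-shift reciprocal sequence rather than a slow hardware DIV. */
static uint64_t to_kilobytes(uint64_t bytes) { return bytes / 1000; }
```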

@Jan

OMG!! x cycles more?!?
Now the performance of my Nvidia Tesla supercomputing cluster is totally ruined! lol

Now seriously, if you want the best performance:

Remove the graphical interface from your operating system.
Have you any idea how many CPU cycles that uses? Also stop using a graphical browser and media player. We can read books, right?
It’ll save a lot of CPU cycles.

For these few calculations, it’s no big deal.
Or is this the kind of stuff that knocks your CPU off its socks? If so, it’s best to consider a technical problem (HW or software related). Or upgrading.

What about a division by a power of two being compiled as an actual division?
That probably happens in poorly optimized operating systems.
Not going to name names just yet (hint: Microsoft).

Frankly, in a good cpu, a division and a shift instruction should both only cost one cycle.

Wow, someone needs to go study some microelectronics and assembly, urgently.
A victim of the Java/script generation, I guess?

How dare you mix Java and Javascript in the same sentence !
:slight_smile:
(I mean, as this thread is already way off topic, why not continue a bit more …)

@Gedolo: I wasn’t saying that it MATTERS (nowadays). I was simply telling you that you are wrong.

Now please continue with your blabbering (I’m on vacation for the next two weeks, I don’t care).

Jan.

Big blunder of mine on page 7.
It was late. I was very tired.

Now something serious again.
This article: http://www.geeks3d.com/20100629/test-opengl-geometry-instancing-geforce-gtx-480-vs-radeon-hd-5870/

Talks about geometry instancing.
The demo discussed in the article has a few modes.
In one mode there is a limitation on the number of objects that can be drawn per call.
It would be handy for that person if there were a query to get the maximum amount on the current GPU. This way the code can be written to be more robust, scalable, compatible and better performing; see the sketch after the quote below.

Quote:

F5 key: geometry instancing: it’s the real hardware instancing (HW GI). There is one source for geometry (a mesh) and rendering is done in batches of 64 instances per draw call. Actually on NVIDIA hardware, 400 instances can be rendered with one draw call, but that does not work on ATI due to the limitation of the number of vertex uniforms. 64 instances per batch work fine on both ATI and NVIDIA. The transformation matrix is computed on the GPU and per-batch data is passed via uniform arrays: there is a uniform array of vec4 for positions and another vec4 array for rotations. OpenGL rendering uses the glDrawElementsInstancedARB() function. The GL_ARB_draw_instanced extension is required. HW GI drastically reduces the number of draw calls: for the 20,000-asteroid belt, we have 20000/64 = 313 draw calls instead of 20,000.
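Something like the sketch below could serve for that, under two assumptions that are mine and not from the article: eight floats of per-batch data per instance (one vec4 position plus one vec4 rotation, as described in the quote) and an arbitrary reserved margin for the demo’s other uniforms.

```c
#include <GL/glew.h>

/* Rough sketch: derive the per-batch instance count from the driver's
 * vertex-uniform limit instead of hard-coding 64. The reserved headroom
 * and the 8-floats-per-instance figure are assumptions. */
static int max_instances_per_batch(void)
{
    GLint components = 0;
    glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS, &components);

    const int reserved = 64;            /* headroom for matrices and other uniforms */
    const int floats_per_instance = 8;  /* vec4 position + vec4 rotation */
    int n = (components - reserved) / floats_per_instance;
    return n > 0 ? n : 1;
}
```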

Maybe it’s best for functions that return a memory size to only return whole numbers of bytes or bits.
Or would it be better to only do bits? That way it’s exactly the right size, with maximum accuracy down to the bit.