Old NVIDIA + hashcat (Kali Linux) + CL_DEVICE_LOCAL_MEM_SIZE

Hi all. Sorry if I am off topic. I looked over the forum rules and did not find any restrictions, and I did not find any answers on the Web. Please help.

I am trying to run hashcat (a password recovery utility that uses OpenCL) on an old laptop (Acer Aspire 5920G) with an NVIDIA GeForce 9500M GS 512 MB under Kali (Debian) Linux. I’ve installed the nvidia-legacy-340xx drivers and opencl-icd.

Hashcat does not work; it says that local memory is too small. The check is based on clGetDeviceInfo(CL_DEVICE_LOCAL_MEM_SIZE): if (device_local_mem_size < 32768) { ... }
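For context, the value hashcat compares against comes from a standard OpenCL device query. Here is a minimal standalone sketch of the same query (my own simplification for illustration, not hashcat’s actual code; it just takes the first GPU of the first platform):

[CODE]
/* Minimal sketch: query CL_DEVICE_LOCAL_MEM_SIZE for the first GPU device.
   Simplified for illustration; not hashcat's actual code.
   Build with e.g.: gcc query_local_mem.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_ulong local_mem_size = 0;

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS) {
        fprintf(stderr, "No OpenCL platform found\n");
        return 1;
    }
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS) {
        fprintf(stderr, "No GPU device found\n");
        return 1;
    }

    clGetDeviceInfo(device, CL_DEVICE_LOCAL_MEM_SIZE,
                    sizeof(local_mem_size), &local_mem_size, NULL);

    printf("CL_DEVICE_LOCAL_MEM_SIZE = %lu bytes\n",
           (unsigned long)local_mem_size);

    /* hashcat refuses to run when this value is below 32768 (32 KB). */
    return 0;
}
[/CODE]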

As I understand it, this is how much memory can be allocated per compute unit. But I did not find any such characteristic in the datasheets for the GPU or its driver. So here are my questions:

  1. Is it hardware or software defined?
  2. Is it really true that the 9500M GS has less than 32 KB of this memory?
  3. Can it be overridden by OpenCL, CUDA or driver settings (e.g. using a different type of memory, or a different number of compute units (512 now))?
  4. In which source file of the OpenCL packages for Debian is clGetDeviceInfo() implemented?

Thank you

It is defined by the hardware and reported to the application by the driver. Some hardware can trade cache size for local memory size.

You can find the values for your device by running the clinfo application.

[QUOTE=Dithermaster;44111]It is defined by the hardware and reported to the application by the driver. Some hardware can trade cache size for local memory size.

You can find the values for your device by running the clinfo application.[/QUOTE]

Yes, I know; I did not mention it for brevity. It is 16384 bytes. But hashcat uses OpenCL the same way clinfo does, so obviously it sees the same value.

If you know what you are talking about, maybe you can tell me where to read or where to look? Or explain it in a few words: what kind of memory is it, what are the possible ways to “trade” it, and where can I check it in the hardware? I suspect that this value (16 KB) is wrong because of a version mismatch: some of the modules have 390 in their version, while I use the 340 driver.
I actually have two questions: “is it correct?” and “is it possible to increase it?”

As Dithermaster said, local memory size is defined by the hardware and there is no way to increase it. On the Tesla architecture your card is based on, the local memory per streaming multiprocessor is 16 KB, no more.
On more recent NVIDIA architectures, local memory and L1 cache share the same hardware, and it is possible to tune how much of it is allocated to local memory versus L1 cache. However, AFAIK, this can only be done with CUDA, not OpenCL (see the sketch below). And the Tesla architecture doesn’t support such a feature anyway.
You can have a look at the Tesla architecture here: NVIDIA's GT200: Inside a Parallel Processor
(local memory is called ‘shared memory’ in NVIDIA terminology)
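Just to illustrate what that CUDA-side tuning looks like on Fermi and later GPUs, here is a hedged host-code sketch of the relevant runtime call. It is an example of the mechanism only; it has no effect on a Tesla-generation part like the 9500M GS, where the 16 KB is fixed.

[CODE]
/* Sketch: asking the CUDA runtime to prefer a larger shared-memory (OpenCL
   "local memory") carve-out over L1 cache. Only meaningful on Fermi and newer
   GPUs; Tesla-generation chips such as the GeForce 9500M GS have a fixed 16 KB.
   Build with e.g.: nvcc cache_config.c -o cache_config */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    /* Device-wide preference: favour shared memory over L1 cache. */
    cudaError_t err = cudaDeviceSetCacheConfig(cudaFuncCachePreferShared);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaDeviceSetCacheConfig: %s\n", cudaGetErrorString(err));
        return 1;
    }

    /* Read back the currently requested preference. */
    cudaFuncCache cfg;
    cudaDeviceGetCacheConfig(&cfg);
    printf("Cache config preference: %d\n", (int)cfg);
    return 0;
}
[/CODE]

On hardware without a configurable split, the runtime simply ignores the preference, which is why nothing like this can raise the 16 KB limit on your card.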

Thank you!