First of all, you should not use double for anything: doubles are not supported by the vertex hardware on most GPUs, will be converted to floats anyway, and therefore only cost performance with no precision benefit. Since all modern GPUs use IEEE single-precision floats internally, floats are usually a good choice. It is also possible that GPUs natively accept vertex attributes as signed/unsigned bytes (typically used for normals and colors), but this is only a guess on my side. If the GPU cannot read a type directly, the conversion is performed in the driver, resulting in a performance loss.
Yes, it does. Bigger data types increase memory consumption, and performance may decrease as a result. There is an additional catch that is more significant than that: if you use a format that is not supported by the hardware, the driver has to convert the data, which can significantly reduce performance. The performance hit is especially big if you use such a format with vertex buffer objects.