Masking with Shaders vs Blending

I am working with OpenGL ES 2.0/3.0 on iOS and am trying to figure out the best way to do masking. Right now I am trying out two approaches.

For context, I am trying to add a smaller image, properly positioned and masked, to a larger animating image/sprite/gif.

Path 1: Use a shader with two different textures, one being the mask and the other being the image to be masked. The problem I am having here is that the modelview matrix of the mask is the same as the background's, while the modelview of the smaller image places it inside the larger image. The shader uses the smaller image's modelview to set gl_Position. How do I convert the smaller image's location into the larger image's space in order to look up the proper alpha for masking?
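One way to bridge the two spaces (a sketch with my own attribute/uniform names, none of which come from the question): both matrices end up in the same clip space, so project the small image's vertex with its own MVP, then carry that clip-space position back into the mask's local space with the inverse of the mask's MVP (computed on the CPU, since GLSL ES 2.0 has no inverse()). This assumes the mask quad spans [-1, 1] in its local space and an orthographic projection so w stays 1:

```glsl
// Vertex shader sketch — names are hypothetical, not from the question.
attribute vec4 a_Position;
attribute vec2 a_TexCoord;
uniform mat4 u_ImageMVP;        // MVP of the small image
uniform mat4 u_MaskMVPInverse;  // inverse of the mask/background MVP, built on the CPU
varying vec2 v_TexCoord;
varying vec2 v_MaskCoord;

void main() {
    gl_Position = u_ImageMVP * a_Position;
    v_TexCoord  = a_TexCoord;
    // Clip space is shared by both objects, so map the image's clip
    // position back into the mask's local space, then remap the mask
    // quad's [-1, 1] extent to [0, 1] texture coordinates.
    vec2 maskLocal = (u_MaskMVPInverse * gl_Position).xy;
    v_MaskCoord = maskLocal * 0.5 + 0.5;
}
```

If the projection is perspective rather than orthographic, pass the mask coordinate through unremapped and do the perspective divide per-fragment instead.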

Path 2: Use blending to mask the image properly. I am using the code from this answer (stackoverflow-questions-5097145). I have a few problems here: the white is not acting as a mask and I don't know why (it works okay if I add an alpha channel to the mask and use that), and the smaller image, in addition to being masked, has its own transparency that I want to keep, which this method seems to lose.
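Both Path 2 symptoms go away if the masking is moved into the fragment stage instead of the blend unit: treat the mask's luminance (white = visible) as a coverage value and multiply it into the image's existing alpha, which preserves the image's own transparency. A sketch pairing with the vertex shader above (texture uniform names are my own):

```glsl
// Fragment shader sketch — names are hypothetical.
precision mediump float;
uniform sampler2D u_ImageTexture;
uniform sampler2D u_MaskTexture;
varying vec2 v_TexCoord;
varying vec2 v_MaskCoord;

void main() {
    vec4 image = texture2D(u_ImageTexture, v_TexCoord);
    // Use the mask's red channel as coverage: white keeps the pixel,
    // black removes it (for a grayscale mask r == g == b).
    float mask = texture2D(u_MaskTexture, v_MaskCoord).r;
    // Multiplying the alphas applies the mask while keeping the
    // image's own transparency intact.
    gl_FragColor = vec4(image.rgb, image.a * mask);
}
```

Draw with ordinary alpha blending (glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)) and no separate mask pass is needed.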

Which path will give me better/faster/cleaner code, or is generally recommended, and how can I solve the issues with that path? If there is a better way to do this kind of thing, I am also open to trying it.

modelview matrix of the mask

In a vertex shader there is nothing stopping you from having two modelview matrices.

Right, I was able to send in multiple modelview matrices. I just didn't understand what to do next. How would I translate the texture coordinate from one modelview space to the other so that I can get the proper mask value?

I don't understand your question. You said you have two modelview matrices. When you apply one to the mask and the other to the image, the mask and image coordinates both end up in camera space. I assume your mask is some HUD cut-out.
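To make the shared-space point concrete: if the mask really is a full-screen HUD cut-out, the clip-space position alone already gives the lookup coordinate, with no inverse matrix involved. A minimal sketch (hypothetical names; the per-vertex remap is only valid when w is constant, e.g. with an orthographic projection):

```glsl
// Vertex shader sketch for a full-viewport mask — names are my own.
attribute vec4 a_Position;
uniform mat4 u_ImageMVP;
varying vec2 v_MaskCoord;

void main() {
    gl_Position = u_ImageMVP * a_Position;
    // Clip -> NDC -> [0, 1]: both objects share this space after their
    // respective matrices are applied, so it works as a mask lookup.
    v_MaskCoord = (gl_Position.xy / gl_Position.w) * 0.5 + 0.5;
}
```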