Add Shapekeys to wearables
under review
Morph Bot
Shapekeys would further empower creators by enabling simple vertex animations, for example eye or mouth movement for helmet/face wearables.
Combined with our VRM export or in-world applications, this would significantly enhance user interactions via predefined methods, such as driving mouth shape keys with microphone input, or using a webcam/VR headset for face and body tracking. I believe this will be a necessity for metaverse user interactions and would help future-proof our wearables marketplace and models.
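As a minimal sketch of the microphone-driven mouth idea: a sampled input level can be normalized into a blendshape weight. The function name and threshold values below are illustrative assumptions, not part of any existing Decentraland API:

```typescript
// Sketch: map a microphone RMS amplitude sample to a "jawOpen" blendshape
// weight in [0, 1]. The noise floor and ceiling are tunable assumptions.
function rmsToJawOpen(rms: number, noiseFloor = 0.02, ceiling = 0.3): number {
  // Ignore background noise below the floor.
  if (rms <= noiseFloor) return 0;
  // Normalize into [0, 1] and clamp, so loud input fully opens the jaw.
  const t = (rms - noiseFloor) / (ceiling - noiseFloor);
  return Math.min(1, Math.max(0, t));
}

console.log(rmsToJawOpen(0.01)); // 0 → quiet sample, mouth stays closed
console.log(rmsToJawOpen(0.3));  // 1 → loud sample, mouth fully open
```

In practice the RMS value would come from something like the Web Audio API's analyser node, with smoothing applied so the mouth does not flicker frame to frame.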
These can be enabled alongside .GLB in a similar manner to how Blender currently exports natively supported blendshapes/morph targets. I would recommend we follow the current standard for enabled shape keys so they are interoperable with all platforms:
The full 52-blendshape ARKit format: https://arkit-face-blendshapes.com/
With this list, a creator could theoretically enable full facial locomotion for a wide variety of uses, and make our avatars interoperably competitive with custom VR model rigs (which currently rely on manual work on top of the base Decentraland VRM).
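To show how the named ARKit blendshapes could be wired to a GLB at runtime: glTF itself has no official field for morph-target names, but the widely used convention (followed by Blender's exporter) is `mesh.extras.targetNames`. A sketch of resolving an ARKit name to a morph-target index, with the lookup function being a hypothetical helper:

```typescript
// Sketch: resolve an ARKit blendshape name to a glTF morph-target index
// using the common `extras.targetNames` convention.
function targetIndex(targetNames: string[], arkitName: string): number {
  // Returns -1 when the wearable does not expose that blendshape,
  // letting the runtime skip unsupported expressions gracefully.
  return targetNames.indexOf(arkitName);
}

// Example list, as an exporter might write it into extras.targetNames:
const names = ["jawOpen", "eyeBlinkLeft", "eyeBlinkRight"];
console.log(targetIndex(names, "jawOpen"));        // 0 → supported
console.log(targetIndex(names, "mouthSmileLeft")); // -1 → not supported
```

Creators would then only need to name their shape keys after the ARKit list for tracking data to map onto any wearable automatically.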
Alvaro Luque pilero
Hi @Morph Bot
Thank you for your request!
I've been thinking about this for a long time: how to bring in facial animations, facial motion capture, expressions, and phonemes for voice chat. As you mention, VRM expressions and ARKit are definitely the best approach, and would enable us to do this in several steps.
The main problem to resolve is what to do with the current avatar structure, particularly the head and the NFTs for eyebrows, mouth, and eyes. Those won't be compatible with any facial expression, as they are just simple textures, and the head does not contain enough geometry to drive blendshapes consistently. How do we make 2D NFT facial features compatible with expressions?
Actually, the head is just like any other wearable, but it's hidden in the avatar slots. A solution may be having a new slot inside the body, where you could select a different "head" piece that includes the necessary geometry and blendshapes to be compatible with expressions, but that would automatically disable any 2D NFT.
Another idea would be to create 2D representations in a spritesheet that replace the facial features of the 2D NFTs whenever you perform an action like opening the mouth or blinking, a bit like how Nintendo Miis do expressions.
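The spritesheet idea above could be as simple as selecting a UV offset per expression state. A minimal sketch, where the expression names and frame layout are hypothetical:

```typescript
// Sketch: pick a frame (U offset) in a horizontal spritesheet for a 2D
// facial feature, based on the current expression state.
type Expression = "neutral" | "blink" | "mouthOpen";

const frameColumn: Record<Expression, number> = {
  neutral: 0,
  blink: 1,
  mouthOpen: 2,
};

// Returns the U offset into a horizontal spritesheet with `frameCount` frames.
function uOffset(expr: Expression, frameCount = 3): number {
  return frameColumn[expr] / frameCount;
}

console.log(uOffset("neutral")); // 0 → first frame
```

This keeps existing 2D NFTs untouched; the renderer just shifts which region of an expanded texture is sampled.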
What are your thoughts regarding this?
Morph Bot
Good question. Firstly, I think all existing wearables should be left as-is, with the option for users to update their creations if they want to attach shape keys. However, as you point out, this will require a way to stay compatible with the existing avatar infrastructure.
From a technical perspective, I believe we could take the mesh that the texture is currently applied to and section it out as its own 3D wearable, with its own polygon limit that users can customize. This is not trivial, but I believe it would be the best solution to enable full vertex animations in these areas without modifying the existing textures themselves. The existing wearables would not need shapekeys enabled; they would simply need to be compatible with a new vertex mesh that goes in the 'eye area' and overrides the existing eye mesh and textures.
Users could then modify these, as long as the points that connect to the rest of the head are standardized, similar to where our top-body/bottom-body/head connect. We may see some overlap, but I think this could be solved via z-index prioritization where meshes share the same vertex positions, to prevent z-flickering (prioritizing the eye mesh over the base head mesh, for example).
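The z-index prioritization mentioned above could be a simple per-slot priority table that decides which mesh wins where vertices coincide. A sketch under assumed slot names and priority values:

```typescript
// Sketch: resolve overlapping, co-located meshes by slot priority so the
// renderer favors the override (e.g. eye mesh) over the base head.
const slotPriority: Record<string, number> = {
  baseHead: 0,
  eyes: 10,
  mouth: 10,
};

// Higher priority wins where vertex positions coincide; in practice this
// could translate to render order or a small depth bias.
function visibleSlot(overlapping: string[]): string {
  return overlapping.reduce((best, slot) =>
    (slotPriority[slot] ?? 0) > (slotPriority[best] ?? 0) ? slot : best
  );
}

console.log(visibleSlot(["baseHead", "eyes"])); // "eyes"
```

A depth bias tends to be more robust than pure draw order when the meshes deform independently under shape keys, but either would remove the flicker.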
Alternatively, we could leave all existing 2D slots as-is and enable a new slot that goes on top and removes the existing textures, or support shape keys via head overrides, such as the wonderbot helmet.
However, I do believe that having swappable eyes/mouths is the superior, albeit more technically challenging, path, and it would be great if people could expand upon this and get creative (e.g. a duck-shaped beak that can have shapekeys applied, as long as it 'connects' and is weighted correctly to the base mesh area of the standard avatar head).
A third option would be to enable both, with logic where, if the wearable does not include a mesh, the texture is applied as is tradition; but if a mesh is present, a preconfigured region of the 'base head' has its mesh removed and replaced with the 3D wearable.
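The dual-path logic above can be sketched as a single branch per facial slot. All type and field names here are hypothetical, not part of the current wearable schema:

```typescript
// Sketch: if a wearable ships a 3D mesh for a facial slot, hide that region
// of the base head and attach the mesh; otherwise fall back to the 2D texture.
interface FacialWearable {
  slot: "eyes" | "mouth" | "eyebrows";
  mesh?: string;    // reference to a GLB mesh, if the creator provided one
  texture: string;  // 2D texture, always present as the fallback
}

type ApplyResult =
  | { kind: "mesh"; hideBaseRegion: true; mesh: string }
  | { kind: "texture"; texture: string };

function applyFacialWearable(w: FacialWearable): ApplyResult {
  if (w.mesh !== undefined) {
    // 3D path: remove the preconfigured base-head region, attach the mesh.
    return { kind: "mesh", hideBaseRegion: true, mesh: w.mesh };
  }
  // 2D path: apply the texture as is tradition.
  return { kind: "texture", texture: w.texture };
}
```

Keeping the texture mandatory means every wearable still renders on clients or platforms that ignore the mesh path, which preserves backwards compatibility.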
Morph Bot
Here's a more accurate breakdown of the ARKit shapekeys, direct from Apple: https://developer.apple.com/documentation/arkit/arfaceanchor/blendshapelocation