Shapekeys would further empower creators by enabling simple vertex animations, for example eye and mouth movement for helmet and face wearables.
Combined with our VRM export or in-world applications, this would significantly enhance user interaction through predefined methods, such as driving mouth shape keys from microphone input, or using a webcam or VR headset for body tracking. I believe this will become a necessity for metaverse user interaction and would help future-proof our wearables marketplace and models.
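As a rough illustration of the microphone-driven case, here is a minimal browser sketch using three.js (not the Decentraland SDK); the `mouthOpen` key name and the loudness mapping are illustrative assumptions, not an agreed standard:

```ts
import * as THREE from 'three';

// Drive a mouth morph target from live microphone loudness.
async function driveMouthFromMic(mesh: THREE.Mesh): Promise<void> {
  // Ask the browser for microphone access and wire it into an analyser node.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 256;
  ctx.createMediaStreamSource(stream).connect(analyser);

  const samples = new Uint8Array(analyser.frequencyBinCount);
  // 'mouthOpen' is a hypothetical shape key name exported with the GLB.
  const mouthIndex = mesh.morphTargetDictionary!['mouthOpen'];

  const update = () => {
    analyser.getByteTimeDomainData(samples);
    // Rough loudness estimate: peak deviation from the 128 midpoint.
    let peak = 0;
    for (const s of samples) peak = Math.max(peak, Math.abs(s - 128));
    // Map loudness onto the morph target weight (0..1).
    mesh.morphTargetInfluences![mouthIndex] = Math.min(1, peak / 64);
    requestAnimationFrame(update);
  };
  update();
}
```

A real implementation would likely smooth the signal or use proper viseme detection, but even this level of wiring shows how little is needed once the shape keys exist in the exported model.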
These can be enabled alongside .GLB in the same way Blender currently exports natively supported blendshapes/morph targets. I would recommend we follow the current standards for shape key names so they remain interoperable with all platforms:
With this list, a creator could theoretically enable full locomotion for a wide variety of uses and make our avatars competitive in interoperability with custom VR model rigs, which currently require manual work on top of the base Decentraland VRM.
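Because Blender writes shape key names into the exported glTF, other runtimes can discover them by name, which is what makes a shared list interoperable. A minimal sketch of checking an exported GLB against such a list, again using three.js; `avatar.glb` and the key names here are placeholders for whatever standard we adopt:

```ts
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

// Hypothetical subset of the standard shape key list.
const REQUIRED_KEYS = ['eyeBlinkLeft', 'eyeBlinkRight', 'jawOpen'];

new GLTFLoader().load('avatar.glb', (gltf) => {
  gltf.scene.traverse((obj) => {
    const mesh = obj as THREE.Mesh;
    if (!mesh.isMesh || !mesh.morphTargetDictionary) return;
    // three.js exposes the exported shape key names via morphTargetDictionary.
    const names = Object.keys(mesh.morphTargetDictionary);
    const missing = REQUIRED_KEYS.filter((k) => !names.includes(k));
    console.log(`${mesh.name}: found [${names.join(', ')}]`);
    if (missing.length > 0) {
      console.warn(`${mesh.name}: missing standard keys: ${missing.join(', ')}`);
    }
  });
});
```

A check along these lines could eventually live in the wearable submission pipeline, so creators get feedback on missing keys before publishing.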