Is the camera good enough for mouth animations? Are there developer tools for this? Or is it just grayscale 400x400 sensor data?
Hi, I'm just looking for clarity. I see the following documentation.
I am wondering whether HP provides any utility that translates the face/mouth camera data into some other kind of recognized data, such as predetermined mouth-gesture recognition or animation-style tracked bone-transform data. It seems it doesn't, but I want to be sure about the current state of this. If not, is there any chance we will see such developer utilities in the future?
Please see our latest dataset and whitepaper on using the face camera for avatar recognition. This is informational only; there is no Omnicept SDK release that supports this functionality, but we've provided pointers and references for the work:
At present, you are correct: the only data you get from the mouth camera is raw grayscale images. The HP Omnicept team is working to provide avatar APIs in late 2021/early 2022. If you are signed up on our console, you will receive an email when this feature becomes available. Let me know if you have specific requests or insights you'd like me to pass along to the dev team.
Any update on the ETA for this avatar API? Also, any indication of what kind of data we can expect?
Hi Lauren Domingo! Is there any news on this?
We have an Omnicept here, but so far the face-camera image is pretty useless, except for looking nice... :-/
We're using Unity.
We tried to apply other existing face-capture software, but it generally has trouble recognizing anything, since only a fraction of the face is visible and the image is quite distorted due to the wide angle and close distance.
It would really help if you had some pointers on the best way to process this camera image and extract face-tracking data from it.
Or maybe there are at least some examples out there that already accomplish this task?
Thank you for any help!
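For anyone else experimenting with this, here is a minimal preprocessing sketch in Python/NumPy. It is not an official HP pipeline, and the crop region and steps are assumptions: it takes a raw 400x400 grayscale frame (as the thread describes), crops the lower half where the mouth typically appears, histogram-equalizes to compensate for uneven IR illumination, and scales to [0, 1] floats for a downstream landmark or expression model.

```python
import numpy as np

CAM_W, CAM_H = 400, 400  # sensor resolution per the thread (grayscale)

def preprocess_mouth_frame(frame: np.ndarray) -> np.ndarray:
    """Hypothetical preprocessing for a raw 400x400 grayscale mouth image.

    Steps (all assumptions, not an official HP pipeline):
      1. Crop to the lower half, where the mouth typically appears.
      2. Histogram-equalize to compensate for illumination falloff.
      3. Return float32 values in [0, 1] for a downstream model.
    """
    assert frame.shape == (CAM_H, CAM_W) and frame.dtype == np.uint8
    roi = frame[CAM_H // 2 :, :]                             # 1. lower-half crop
    hist, _ = np.histogram(roi, bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)  # 2. normalized CDF
    equalized = cdf[roi]                                     # map pixels through CDF
    return equalized.astype(np.float32)                      # 3. already in [0, 1]

# Example on a synthetic frame (a real frame would come from the Omnicept client):
frame = np.random.default_rng(0).integers(0, 256, (400, 400), dtype=np.uint8)
out = preprocess_mouth_frame(frame)
print(out.shape)  # (200, 400)
```

Lens undistortion for the wide-angle camera is deliberately left out here, since it requires the camera's intrinsics, which HP has not published in this thread; OpenCV's calibration tools would be the usual route once those are known.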
We have worked on this within our team and achieved promising results, but since then there have been Omnicept changes (https://developers.hp.com/omnicept-xr/blog/omnicept-changes).
I am going to reach out to our research team to see if they can provide pointers and references for expression recognition from mouth images.
Hi Lauren Domingo,
are there any updates regarding the aforementioned avatar APIs?