Comment by reactordev 4 days ago

Yessssssss!!!!! However, we need something that doesn’t rely on an iPhone. We need webcam support. You can use your iPhone as a webcam, and you can also use more powerful video devices as webcams. I would love a DIY mudface map that’s a b/w displacement map, so you can capture the wrinkles of the face and map that with Blender trackers. Seriously though, this is a huge leap towards that future.

dagmx 4 days ago

This repo doesn’t provide any computer vision algorithms. It’s taking the values the phone provides for facial activations.

You’re asking for a different project altogether.
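
For context, the per-frame data is just a set of named activation weights in the 0–1 range. A rough illustration (the keys are ARKit’s blendshape identifiers; the actual wire format is whatever the plugin defines, so don’t read this as its real payload):

    # One frame of facial "activations": ARKit blendshape identifiers mapped
    # to weights in [0, 1]. Illustrative only; the plugin's actual format may differ.
    frame = {
        "eyeBlinkLeft": 0.12,
        "eyeBlinkRight": 0.11,
        "jawOpen": 0.43,
        "mouthSmileLeft": 0.78,
        "mouthSmileRight": 0.74,
        "browInnerUp": 0.05,
    }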

syntaxing 4 days ago

As others have said, it’s using the iOS facial tracking API, which relies on the front TrueDepth camera (i.e., the camera used for Face ID).

dymk 4 days ago

Is it using the iPhone’s structured light / lidar, or just the camera? I don’t know how the project works, but calling out the iPhone specifically makes me think it’s using a hardware feature that isn’t in a generic webcam.

  • dagmx 4 days ago

    It’s specifically using ARKit’s facial tracking, which gives you FACS blend shape values:

    https://developer.apple.com/documentation/ARKit/tracking-and...

    This Blender plugin is basically just receiving those values from the OS API and applying them (a minimal sketch of that step follows this comment). It’s a fairly common integration, and as a result almost all alternatives depend on ARKit on an iPhone rather than implementing any algorithms themselves.

    Variations of this plugin’s functionality have existed since the introduction of the iPhone X in 2017.
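
    A minimal sketch of that receive-and-apply step, assuming the target mesh has shape keys named after the ARKit blendshapes (the UDP/JSON transport, port, and object name are illustrative assumptions, not this plugin’s actual code):

        # Receive one frame of blendshape weights and copy them onto Blender
        # shape keys. Transport, port, and object name are hypothetical.
        import json
        import socket

        import bpy

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", 9000))            # hypothetical port

        obj = bpy.data.objects["FaceMesh"]      # hypothetical object name
        keys = obj.data.shape_keys.key_blocks   # shape keys named like "jawOpen"

        # A real addon would poll from a timer or modal operator instead of
        # blocking the UI like this single-shot example does.
        data, _ = sock.recvfrom(65535)
        weights = json.loads(data)              # e.g. {"jawOpen": 0.43, ...}
        for name, value in weights.items():
            if name in keys:
                keys[name].value = value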

  • s1mplicissimus 4 days ago

    The face recognition trick (generating a 3D vertex mesh from the video) should also be doable with a homelab setup. I assume lidar would improve the signal a lot by adding factually correct depth values, though.
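
    For the webcam-only version, one common building block is MediaPipe Face Mesh (no relation to the plugin above): it infers a dense 3D landmark mesh from plain RGB frames, with monocular z estimates rather than measured depth. A minimal sketch:

        # Estimate a 3D face mesh from a single webcam frame using MediaPipe.
        # Depth (z) is inferred from the image, so it's less accurate than
        # TrueDepth/lidar measurements.
        import cv2
        import mediapipe as mp

        face_mesh = mp.solutions.face_mesh.FaceMesh(
            static_image_mode=False, max_num_faces=1, refine_landmarks=True)

        cap = cv2.VideoCapture(0)
        ok, frame = cap.read()
        if ok:
            results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_face_landmarks:
                # ~468 landmarks with normalized x, y and a relative z estimate
                for lm in results.multi_face_landmarks[0].landmark[:5]:
                    print(lm.x, lm.y, lm.z)
        cap.release()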