dymk 4 days ago

Is it using the iPhone's structured light / lidar, or just the camera? I don't know how the project works, but calling out the iPhone specifically makes me think it's using a hardware feature that a generic webcam doesn't have.

dagmx 4 days ago

It's specifically using ARKit's facial tracking, which gives you FACS blend shape values:

https://developer.apple.com/documentation/ARKit/tracking-and...

This Blender plugin is basically just receiving those values from the OS API and applying them. It's a fairly common integration, and as a result almost all alternatives depend on ARKit on an iPhone rather than implementing any tracking algorithms themselves.
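
To illustrate how thin that "apply" layer can be, here is a hedged sketch of the Blender side in bpy. The object name, the incoming dict, and the assumption that shape keys are named after ARKit's coefficients (e.g. jawOpen, browInnerUp) are all hypothetical, not taken from this plugin's actual code:

    # Hypothetical sketch, not this plugin's code: map ARKit-style blend
    # shape coefficients (each in the 0.0-1.0 range) onto same-named
    # shape keys of a face mesh.
    import bpy

    def apply_blend_shapes(obj_name, coefficients):
        obj = bpy.data.objects[obj_name]
        key_blocks = obj.data.shape_keys.key_blocks
        for name, value in coefficients.items():
            key = key_blocks.get(name)
            if key is not None:
                key.value = value  # shape key weight, same 0.0-1.0 range

    # One frame of data as it might arrive from the phone over the network:
    frame = {"jawOpen": 0.42, "browInnerUp": 0.10, "eyeBlinkLeft": 0.95}
    apply_blend_shapes("FaceRig", frame)

The heavy lifting (detecting the face, solving for the coefficients) all happens in ARKit on the phone; the plugin mostly forwards numbers.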

Variations of this plugin's functionality have existed since the introduction of the iPhone X in 2017.

s1mplicissimus 4 days ago

The face-tracking trick (generating a 3D vertex mesh from the video) should also be doable with a homelab setup. I assume lidar would improve the signal a lot by contributing measured, rather than inferred, depth values, though.
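
For reference, a minimal camera-only sketch using MediaPipe's Face Mesh, which infers a 468-point 3D mesh from an ordinary webcam. The z values are estimated from the image rather than measured, which is exactly the gap lidar would fill; the device index and landmark choice below are just illustrative:

    import cv2
    import mediapipe as mp

    face_mesh = mp.solutions.face_mesh.FaceMesh(
        max_num_faces=1,
        refine_landmarks=True,  # adds iris landmarks on top of the base mesh
    )
    cap = cv2.VideoCapture(0)  # 0 = default webcam; adjust for your setup
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            landmarks = results.multi_face_landmarks[0].landmark
            # x/y are normalized image coordinates; z is a relative depth
            # inferred by the model, not a measured distance
            nose = landmarks[1]  # index 1 is roughly the nose tip
            print(f"nose: x={nose.x:.3f} y={nose.y:.3f} z={nose.z:.3f}")
    cap.release()
    face_mesh.close()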