Comment by jadbox 6 days ago

How does it work? How does one model on the device get shared by many apps? Does each app have its own inference SDK running, or is there one inference engine shared by many apps (like Ollama does)? If it's the latter, what's the communication protocol to the inference engine?

rshemet 6 days ago

Great question. Currently, each app is sandboxed, so each model file is downloaded inside that app's own sandbox. We're working on enabling file sharing across multiple apps so you don't have to re-download the model.
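
For illustration, a minimal sketch of what that per-app setup implies on the app side: each app pulls the model file into its own sandboxed storage. This uses expo-file-system in a React Native app; the model URL, filename, and helper name are placeholders, not the SDK's actual download mechanism.

```typescript
// Per-app model download sketch: each app stores its own copy of the weights,
// so two apps using the same model currently download it twice.
// MODEL_URL and the filename are placeholders, not real defaults.
import * as FileSystem from 'expo-file-system';

const MODEL_URL = 'https://example.com/models/small-model-q8_0.gguf'; // placeholder
const localPath = `${FileSystem.documentDirectory}model.gguf`; // this app's sandbox

export async function ensureModelDownloaded(): Promise<string> {
  const info = await FileSystem.getInfoAsync(localPath);
  if (!info.exists) {
    // First launch: fetch the weights into the app's own sandbox.
    await FileSystem.downloadAsync(MODEL_URL, localPath);
  }
  return localPath; // hand this path to the inference SDK
}
```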

With respect to the inference SDK: yes, you'll need to install the (React Native/Flutter) framework inside each app you're building.
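
Since the SDK's actual interface isn't shown in this thread, the following is only a hypothetical sketch of what "per-app, in-process" means in practice: the engine is linked into each app and called directly, rather than reached over a socket or HTTP like a shared Ollama server. `InferenceEngine` and `createEngine` are made-up names for illustration.

```typescript
// Hypothetical in-process inference: illustrative names only, not the real SDK API.
// There is no IPC/HTTP protocol to a shared daemon; each app loads weights from
// its own sandbox and calls the bundled engine directly.
interface InferenceEngine {
  complete(prompt: string, opts?: { maxTokens?: number }): Promise<string>;
}

// Placeholder for whatever constructor the SDK actually exposes (hypothetical).
declare function createEngine(opts: { modelPath: string }): Promise<InferenceEngine>;

export async function askLocalModel(modelPath: string, prompt: string): Promise<string> {
  const engine = await createEngine({ modelPath }); // weights live in this app's sandbox
  return engine.complete(prompt, { maxTokens: 128 });
}
```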

The SDK is very lightweight (our own iOS app is <30 MB, and that includes the inference SDK and a ton of other stuff).