Comment by jjfoooo4
My guess is that they will lean towards smaller models and try to provide the best experience for running inference on-device.