Apple's Cubify Anything: Scaling Indoor 3D Object Detection
(github.com)
181 points by Tycho87 4 days ago
In case anyone is interested in rendering USDZ scans in Three.js, I created a demo: https://usdz-threejs-viewer.vercel.app/
There is one in the Three.js example suite, with source:
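If anyone wants a self-contained starting point, here is a minimal sketch using the USDZLoader that ships in the three.js examples tree (the file name is a placeholder, and the loader only supports a subset of USD, so complex scans may not load fully):

    import * as THREE from 'three';
    import { USDZLoader } from 'three/examples/jsm/loaders/USDZLoader.js';

    // Basic scene setup
    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 100);
    camera.position.set(0, 1, 3);

    const renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(innerWidth, innerHeight);
    document.body.appendChild(renderer.domElement);

    // USDZ scans usually carry PBR materials, so give them some light
    scene.add(new THREE.HemisphereLight(0xffffff, 0x444444, 2));

    // 'scan.usdz' is a placeholder path
    new USDZLoader().load('scan.usdz', (model) => scene.add(model));

    renderer.setAnimationLoop(() => renderer.render(scene, camera));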
They overcomplicate things by using three or four different (sub)licenses in one project:
in README:
Licenses - The sample code is released under Apple Sample Code License.
- The data is released under CC-by-NC-ND.
- The models are released under Apple ML Research Model Terms of Use.
Acknowledgements
- We use and acknowledge contributions from multiple open-source projects in ACKNOWLEDGEMENTS.
then the GitHub license button says "Copyright (C) 2025 Apple Inc. All Rights Reserved.",
and the repo contains both LICENSE and LICENSE_MODEL files.
Why make it so confusing and elaborate? It's practically useless for 3rd-party devs who want to build apps and release them on their platform. So just make it one license with the strictest restrictions you can, AGPL and/or CC-BY-NC-ND.
Why isn't CC-BY-NC-ND adequate for code? It kinda makes sense IMO, and the summary looks useful:
> CC-BY-NC-ND is a type of Creative Commons license that allows others to use a work non-commercially, but they cannot modify it or create derivative works. This means the original work can be shared, but it must remain unchanged and cannot be used for commercial purposes.
Notwithstanding that it's only applied to the data in this case, it sure looks like a useful license for code.
Looks promising, but the license (Attribution-NonCommercial-NoDerivatives) is pretty limiting.
I keep meaning to get back to my suite of equirectangular image functions (viewers, editors, authoring, etc.), and this reminded me to resurrect the Viewer.
https://equinaut.surge.sh/?eqr=https://raw.githubusercontent...
Not quite right, I think, because the source image isn't a 2:1 aspect ratio.
They can look really nice, both in the real world - https://equinaut.surge.sh/?eqr=https://upload.wikimedia.org/... - or the virtual world: https://equinaut.surge.sh/?eqr=https://live.staticflickr.com...
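For the curious, the core of a viewer like this is tiny in three.js: texture the inside of a sphere and leave the camera at its center. It assumes the usual 2:1 equirectangular mapping, which is exactly why a non-2:1 source looks off ('pano.jpg' is a placeholder):

    import * as THREE from 'three';

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 100);

    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(innerWidth, innerHeight);
    document.body.appendChild(renderer.domElement);

    // Inside-out sphere: the negative x scale flips the faces toward the camera
    const geometry = new THREE.SphereGeometry(10, 60, 40);
    geometry.scale(-1, 1, 1);

    const texture = new THREE.TextureLoader().load('pano.jpg');
    texture.colorSpace = THREE.SRGBColorSpace; // three r152+

    scene.add(new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture })));

    renderer.setAnimationLoop((t) => {
      camera.rotation.y = t / 10000; // slow auto-pan; swap in pointer controls
      renderer.render(scene, camera);
    });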
The accuracy of the results doesn't seem that great. For example, looking at the pictures on the wall in their sample, or the beams in the ceiling.
It's possible it's some artifact of the processing resolution, but I think most people who have worked with NNs for AR input will be surprised that this isn't considered disappointing.
> The accuracy of the results doesn't seem that great. For example, looking at the pictures on the wall in their sample, or the beams in the ceiling.
Do you mean the accuracy of the classification or the precision of the lidar scans?
In my experience the lidar precision on the iPhones is decent but not great, so the texture mapping can look a bit off at times.
I'd love to have these bounding boxes on my scans though.
I mean the accuracy with which it's locating the bounds. What is extra curious is that it obviously supports rotated cubes, yet it often doesn't use them when it should, leading to overstated bounds, as if it's over-enthusiastically trying to align things to some inferred axis.
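To put a number on the overstatement: an axis-aligned box around a rotated footprint grows quickly with the angle. A quick sketch (the couch dimensions are invented):

    // A w×d footprint rotated by theta gets an axis-aligned box of
    // (w|cos θ| + d|sin θ|) × (w|sin θ| + d|cos θ|).
    function aabbOfRotatedRect(w, d, theta) {
      const c = Math.abs(Math.cos(theta));
      const s = Math.abs(Math.sin(theta));
      return { width: w * c + d * s, depth: w * s + d * c };
    }

    // A 2.0 m × 0.9 m couch at 45 degrees:
    const { width, depth } = aabbOfRotatedRect(2.0, 0.9, Math.PI / 4);
    console.log((width * depth) / (2.0 * 0.9)); // ≈ 2.34, over twice the true footprint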
This is obviously an attempt at the general case, applying cubes to anything, but what is disappointing is that the performance on boxy objects is lower than I've seen for years from private NNs used for AR and CV (ironically enough, on iPads), using just RGB and no depth.
I half think the exercise here was to establish whether transformers were the way to go for this, and on the strength of that, the answer would be: probably not.
Will it work on a picture of a Power Mac G4 Cube[0]? Whenever I see "cube" and "apple" together (which, in fairness, is rare), I think of the Cube.
I really want an app that lets me scan my whole house with the camera/lidar combo on my phone and export it into Blender, where I can then rearrange furniture and stuff. Apps like Scaniverse get you pretty close, but everything is one mesh; it would be great to be able to slide the couch around the space without having to manually cut it out of the mesh.
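If an app ever exposes per-object boxes like these, carving an object out of the merged scan mesh is mostly bookkeeping. A hypothetical three.js sketch (it assumes the box is already in the mesh's local space, and UVs are dropped, hence the throwaway material):

    import * as THREE from 'three';

    // Copy every triangle whose centroid lies inside `box` into a new mesh.
    function extractSubMesh(mesh, box) {
      const pos = mesh.geometry.attributes.position;
      const index = mesh.geometry.index;
      const triCount = index ? index.count / 3 : pos.count / 3;
      const a = new THREE.Vector3(), b = new THREE.Vector3(), c = new THREE.Vector3();
      const centroid = new THREE.Vector3();
      const kept = [];

      for (let t = 0; t < triCount; t++) {
        const i0 = index ? index.getX(3 * t) : 3 * t;
        const i1 = index ? index.getX(3 * t + 1) : 3 * t + 1;
        const i2 = index ? index.getX(3 * t + 2) : 3 * t + 2;
        a.fromBufferAttribute(pos, i0);
        b.fromBufferAttribute(pos, i1);
        c.fromBufferAttribute(pos, i2);
        centroid.addVectors(a, b).add(c).multiplyScalar(1 / 3);
        if (box.containsPoint(centroid)) kept.push(a.clone(), b.clone(), c.clone());
      }

      // setFromPoints keeps only positions, so UVs/colors are lost here
      const geom = new THREE.BufferGeometry().setFromPoints(kept);
      geom.computeVertexNormals();
      return new THREE.Mesh(geom, new THREE.MeshNormalMaterial());
    }

    // e.g. extractSubMesh(scanMesh, new THREE.Box3(min, max)) -> a draggable couch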