Adityav369 a day ago

Yeah, we had an overload on the ingestion queue. If you try again, it will be much faster, as we just moved to a beefier machine. (The previous ingestion will still work since it is in the queue, but new ones will be faster.)

  • hliyan 17 hours ago

    Wait, your title says this "runs locally"?

    • ArnavAgrawal03 5 hours ago

      Yes! If you're running the local version and it's taking a long time, that's an indication that your GPU isn't being used properly. This can be traced back to the `colpali_embedding_model.py` file, where you can set the device and attention implementation you want PyTorch to use.
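
      To illustrate, here's a minimal sketch of the kind of device/attention selection logic described above. This is not the actual code from `colpali_embedding_model.py`; the variable names and the `flash_attention_2` / `eager` attention choices are assumptions based on common PyTorch/Transformers conventions:

      ```python
      import torch

      # Pick the best available device for inference.
      # (Hypothetical selection logic, not the repo's actual code.)
      if torch.cuda.is_available():
          device = "cuda"
          # flash_attention_2 needs a CUDA GPU and the flash-attn package installed;
          # fall back to "eager" if that's not the case.
          attn_implementation = "flash_attention_2"
      elif torch.backends.mps.is_available():
          # Apple Silicon GPUs
          device = "mps"
          attn_implementation = "eager"
      else:
          device = "cpu"
          attn_implementation = "eager"

      print(f"device={device}, attention={attn_implementation}")
      ```

      If this prints `device=cpu` on a machine with a GPU, that usually points to a CUDA/driver or PyTorch-build mismatch rather than a problem in the application code.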