Comment by curl-up
I assume that by "ChatGPT embeddings" you mean OpenAI embedding models. In that case, "burning" and "freezing" are not opposite at all: they have a cosine similarity of 0.46 (measured with text-embedding-3-large at 1024 dimensions). "Perfectly opposite" embeddings would have a similarity of -1.
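To make the "-1" claim concrete, here's a minimal sketch of cosine similarity in plain numpy. The vector `v` below is made up for illustration; the point is just that a vector and its exact negation score -1, while real antonym embeddings land nowhere near that:

```python
import numpy as np

def cosine_similarity(a, b):
    """Dot product of the two vectors after unit-normalizing each one."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A "perfectly opposite" embedding would be the exact negation of the vector:
v = np.array([0.3, -1.2, 0.5])
print(cosine_similarity(v, -v))  # -1.0
print(cosine_similarity(v, v))   #  1.0
```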
It's a common mistake to assume that words with opposite meanings will have opposite embeddings. In fact, antonyms have a lot in common: "burning" and "freezing" both relate to temperature and physics, both are English words, both are correctly spelled, and both can act as a verb, a noun, and an adjective (not many words can). All of these shared features end up encoded in the embedding, which is why the similarity comes out well above zero.
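A toy model of that intuition, with synthetic vectors rather than real model output: give two "words" a large shared component and opposite positions along one hypothetical "temperature" direction. The shared features dominate the dot product, so the cosine similarity stays strongly positive:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 256

# Features both words share: topic, part of speech, language, spelling...
shared = rng.normal(size=dim)

# A single made-up "temperature" direction on which they differ in sign.
temp_axis = np.zeros(dim)
temp_axis[0] = 3.0

burning = shared + temp_axis
freezing = shared - temp_axis

cos = burning @ freezing / (np.linalg.norm(burning) * np.linalg.norm(freezing))
print(round(cos, 2))  # well above 0: the shared component dominates
```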
This might be a dumb question but... if I get the embeddings of words with a common theme, like "burning", "warm", "cool", "freezing", would I be able to fit an arc (or line) through them reasonably well? So that if I interpolate along that arc/line, I get vectors close to "hot" and "cold"?
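One way to try this is to fit the first principal component through the embeddings and walk along it. The sketch below uses synthetic embeddings (a shared component plus a position on a made-up temperature axis, with noise), not real model output, so it only shows the mechanics, not whether real embeddings actually lie on such a line:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 64
shared = rng.normal(size=dim)
axis = rng.normal(size=dim)
axis /= np.linalg.norm(axis)

# Toy embeddings: shared component + position along a "temperature" axis + noise.
temps = {"burning": 2.0, "warm": 0.7, "cool": -0.7, "freezing": -2.0}
emb = {w: shared + t * axis + 0.05 * rng.normal(size=dim) for w, t in temps.items()}

# Fit a line through the points: the first principal component via SVD.
X = np.stack(list(emb.values()))
mean = X.mean(axis=0)
_, _, vt = np.linalg.svd(X - mean)
direction = vt[0]

def nearest(v):
    """Word whose toy embedding is closest to the point v."""
    return min(emb, key=lambda w: np.linalg.norm(emb[w] - v))

# Interpolate along the fitted line and see which word each point lands near.
for s in np.linspace(-2.5, 2.5, 5):
    print(round(float(s), 1), nearest(mean + s * direction))
```

With real embeddings you'd swap the toy `emb` dict for actual model output; whether the interpolated points decode to "hot" and "cold" is an empirical question.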