I used `all-MiniLM-L6-v2` embeddings for the top 10k most frequent words, then reduced the dimensionality to 2D hyperbolic space, following this doc as a reference: https://umap-learn.readthedocs.io/en/latest/embedding_space.html
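
A minimal sketch of that step, assuming `words` holds the 10k-word list (the word list itself is hypothetical here; the model name and the `output_metric="hyperboloid"` option come from sentence-transformers and the linked UMAP doc respectively):

```python
import umap
from sentence_transformers import SentenceTransformer

# Hypothetical input: `words` is the list of the 10k most frequent words.
words = ["the", "of", "and"]  # truncated for illustration

# 384-dim sentence embeddings from the MiniLM model.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(words)

# output_metric="hyperboloid" makes UMAP optimize the layout in 2D
# hyperbolic space, per the embedding-space doc linked above.
mapper = umap.UMAP(output_metric="hyperboloid", random_state=42).fit(embeddings)
```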
Then, as shown in the doc, I mapped the hyperboloid coordinates onto the Poincaré disk.
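
That projection, following the same doc: the 2D output lives on the hyperboloid with an implicit third coordinate z = sqrt(1 + x² + y²), and dividing by (1 + z) lands every point strictly inside the unit disk:

```python
import numpy as np

# Recover the implicit hyperboloid coordinate z = sqrt(1 + x^2 + y^2).
x, y = mapper.embedding_[:, 0], mapper.embedding_[:, 1]
z = np.sqrt(1 + np.sum(mapper.embedding_ ** 2, axis=1))

# Project onto the Poincaré disk; disk_x, disk_y fall inside the unit circle.
disk_x = x / (1 + z)
disk_y = y / (1 + z)
```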