Websites where you can download free and paid human-generated 3D assets:
https://downloadfree3d.com (my favourite)
https://www.turbosquid.com
https://www.cgtrader.com
https://sketchfab.com
https://www.thingiverse.com
https://free3d.com
https://poly.cam/explore
https://3d.si.edu/explore
Although these days I am more into 3D content from generative AI (text-to-3D):
https://imageto3d.org
https://meshy.ai
https://lumalabs.ai/genie
https://www.sudo.ai/3dgen
https://www.tripo3d.ai/
https://3d.csm.ai/
So much so that I have created two asset packs of hand-picked content from two of these generative services:
https://archive.org/details/@mrbid
https://archive.org/details/meshy-collection-1.7z (800 unique assets)
https://archive.org/details/luma-generosity-collection-1.7z (3,700 unique assets)
Details concerning Generative AI:
At the moment the forefront/SOTA of this technology is maintained by a project called ThreeStudio.
Typically what happens is that Stable Diffusion is used to generate consistent images of the same object from different viewing angles; regular Stable Diffusion models are not capable of this, so a fine-tuned model such as Zero123++ is used for this purpose. Once these images of the object have been produced from different view/camera angles, they are fed into a Neural Radiance Field (NeRF), which outputs a point cloud of densities; NerfAcc is the library most projects use for this step. Finally, the point cloud is turned into a triangulated mesh using Nvidia's DMTet.
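If you want to poke at the multi-view step yourself, here is a minimal sketch using the diffusers custom pipeline the Zero123++ authors publish on Hugging Face (the model IDs and arguments are taken from their repo's README and may have changed since; the input image path is a placeholder):

    import torch
    from PIL import Image
    from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler

    # Load the Zero123++ weights and their custom pipeline from Hugging Face.
    pipeline = DiffusionPipeline.from_pretrained(
        "sudo-ai/zero123plus-v1.2",
        custom_pipeline="sudo-ai/zero123plus-pipeline",
        torch_dtype=torch.float16,
    )
    pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(
        pipeline.scheduler.config, timestep_spacing="trailing"
    )
    pipeline.to("cuda")

    # Condition on one image of the object; a single diffusion pass produces a
    # 3x2 grid of the same object seen from six fixed camera poses.
    cond = Image.open("input.png")  # placeholder path
    grid = pipeline(cond, num_inference_steps=75).images[0]
    grid.save("multiview.png")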
DreamFusion, and its open-source re-implementation Stable-Dreamfusion, can be credited with kicking off this academic field of text-to-3D; and while pre-canned image-to-3D solutions are available, they currently tend not to perform quite as well as most public text-to-3D solutions.
If you are interested in learning more about generative 3D, here are some links you can follow up on:
Various papers and git repositories related to the topic of text-to-3D:
https://paperswithcode.com/task/text-to-3d
https://github.com/topics/text-to-3d
Pre-canned solutions that execute the entire process for you:
https://github.com/threestudio-project/threestudio (ThreeStudio)
https://github.com/bytedance/MVDream-threestudio
https://github.com/THU-LYJ-Lab/T3Bench
Key papers and project pages:
https://arxiv.org/pdf/2209.14988.pdf (DreamFusion)
https://dreamfusion3d.github.io/
https://arxiv.org/pdf/2211.10440.pdf (Magic3D)
https://research.nvidia.com/labs/dir/magic3d/
https://arxiv.org/pdf/2305.16213.pdf (ProlificDreamer)
https://arxiv.org/pdf/2106.09685.pdf (LoRA; ProlificDreamer has a LoRA step)
https://ml.cs.tsinghua.edu.cn/prolificdreamer/
https://arxiv.org/pdf/2303.13873.pdf (Fantasia3D)
https://fantasia3d.github.io/
https://research.nvidia.com/labs/toronto-ai/ATT3D/
https://research.nvidia.com/labs/toronto-ai/GET3D/
Stable Diffusion (and other text-to-image services):
https://easydiffusion.github.io/
https://civitai.com/
https://nightcafe.studio
https://starryai.com/
https://dreamlike.art/
https://www.mage.space/
https://www.midjourney.com/showcase
https://lexica.art/
Zero-shot generation of consistent images of the same object:
https://github.com/cvlab-columbia/zero123
https://zero123.cs.columbia.edu/
https://github.com/SUDO-AI-3D/zero123plus
https://github.com/One-2-3-45/One-2-3-45
https://one-2-3-45.github.io/
https://github.com/SUDO-AI-3D/One2345plus
https://sudo-ai-3d.github.io/One2345plus_page/
https://github.com/bytedance/MVDream
https://mv-dream.github.io/
https://liuyuan-pal.github.io/SyncDreamer/
https://github.com/liuyuan-pal/SyncDreamer
https://www.xxlong.site/Wonder3D/
https://github.com/xxlong0/Wonder3D
The above zero-shot generation models tend to be trained on a dataset of 3D objects; at the moment Objaverse-XL is the largest such dataset, at 10+ million objects, although this does include data from Thingiverse, which has no color or texture information. (These are datasets of download links to free 3D content, not datasets of the actual content itself; a download sketch follows the links below.)
https://github.com/allenai/objaverse-xl
https://objaverse.allenai.org/
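If you want to pull a sample of the dataset, here is a minimal sketch using the objaverse Python package from the repo above (function names are taken from its README and may have changed):

    # pip install objaverse
    import objaverse.xl as oxl

    # Fetch the annotation table: one row per object, with download metadata
    # (source site, file format, license, sha256, etc.).
    annotations = oxl.get_annotations(download_dir="~/.objaverse")

    # Download a small random sample; files come from their original hosts.
    oxl.download_objects(objects=annotations.sample(10))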
The Neural Radiance Field (NeRF):
https://github.com/NVlabs/instant-ngp
https://docs.nerf.studio/
https://github.com/nerfstudio-project/nerfacc
https://github.com/eladrich/latent-nerf
https://github.com/naver/dust3r
(CPU NeRF below)
https://github.com/Linyou/taichi-ngp-renderer
https://github.com/kwea123/ngp_pl
https://github.com/Kai-46/nerfplusplus
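To give a feel for how NerfAcc slots into the pipeline, here is a minimal sketch of its occupancy-grid sampling plus volume rendering, based on the example in the nerfacc README; the density/colour functions are stubs standing in for a real radiance field, and the API details vary between versions:

    import torch
    from nerfacc import OccGridEstimator, rendering

    device = "cuda"
    n_rays = 1024
    rays_o = torch.rand((n_rays, 3), device=device)      # ray origins
    rays_d = torch.randn((n_rays, 3), device=device)
    rays_d = rays_d / rays_d.norm(dim=-1, keepdim=True)  # unit ray directions

    # Occupancy grid over the scene bounds; it lets the sampler skip empty space.
    estimator = OccGridEstimator(roi_aabb=[0.0, 0.0, 0.0, 1.0, 1.0, 1.0]).to(device)

    # Stub density/colour queries; a real project calls its radiance field here.
    def sigma_fn(t_starts, t_ends, ray_indices):
        return torch.ones_like(t_starts)  # density per sample

    def rgb_sigma_fn(t_starts, t_ends, ray_indices):
        rgbs = torch.full((t_starts.shape[0], 3), 0.5, device=device)
        return rgbs, torch.ones_like(t_starts)  # colour + density per sample

    # Mark the grid occupied so the stub produces samples; in a real run this
    # is updated from the radiance field as training progresses.
    estimator.update_every_n_steps(
        step=0, occ_eval_fn=lambda x: torch.ones(x.shape[0], 1, device=device))

    # Sample points along the rays, then volume-render them into pixel values.
    ray_indices, t_starts, t_ends = estimator.sampling(
        rays_o, rays_d, sigma_fn=sigma_fn, near_plane=0.0, far_plane=2.0)
    color, opacity, depth, extras = rendering(
        t_starts, t_ends, ray_indices, n_rays=n_rays, rgb_sigma_fn=rgb_sigma_fn)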
NeRF to 3D Mesh:
https://research.nvidia.com/labs/toronto-ai/DMTet/
https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/dmtet_tu...
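That tutorial essentially boils down to one key kaolin call; here is a minimal shape-only sketch (the random grid tensors are placeholders to show the expected shapes, a real run uses kaolin's pre-generated tetrahedral grids and a learned SDF):

    import torch
    import kaolin

    device = "cuda"

    # Placeholder tetrahedral grid tensors; the kaolin tutorial loads a real
    # pre-generated grid, these random ones only demonstrate the shapes.
    verts = torch.rand((1, 1000, 3), device=device) - 0.5    # (batch, V, 3)
    tets = torch.randint(0, 1000, (2000, 4), device=device)  # (T, 4) vertex ids

    # Signed distance at every grid vertex; DMTet predicts this with a network
    # optimised against the NeRF densities. A sphere SDF stands in here.
    sdf = verts.norm(dim=-1) - 0.25                          # (batch, V)

    # Marching tetrahedra extracts the SDF's zero level set as a triangle mesh;
    # this is the differentiable mesh-extraction step at the heart of DMTet.
    verts_list, faces_list = kaolin.ops.conversions.marching_tetrahedra(
        verts, tets, sdf)
    mesh_verts, mesh_faces = verts_list[0], faces_list[0]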
A lot of good resources can be found at: https://huggingface.co/
I've also written a Medium article which is a wordier version of what I have written here, with some image examples: https://james-william-fletcher.medium.com/text-to-3d-b607bf245031
If you are into the Voxel Art aesthetic, you can voxelize any 3D asset using the free and open-source Drububu.com voxelizer or ObjToSchematic; a scripted alternative is sketched below.
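If you'd rather script that step than use a GUI tool, here is a minimal sketch using the trimesh Python library (my suggestion, not affiliated with either tool above; the file paths are placeholders):

    # pip install trimesh
    import trimesh

    # Load any mesh format trimesh understands ("model.obj" is a placeholder).
    mesh = trimesh.load("model.obj", force="mesh")

    # Sample the mesh onto a regular grid; `pitch` is the edge length of one
    # voxel, so extents/64 gives a model roughly 64 voxels across.
    voxels = mesh.voxelized(pitch=mesh.extents.max() / 64).fill()

    # Export the voxel centres as a PLY point cloud, or one box per voxel.
    trimesh.PointCloud(voxels.points).export("voxels.ply")
    voxels.as_boxes().export("voxel_mesh.ply")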
I maintain a project called Woxel that allows users to create voxel art in the web browser and export it as a 3D PLY file. It's like a simplified MagicaVoxel / Goxel, but with a Minecraft-style control system.
Woxel isn't the only voxel editor that runs in a web browser; there are more listed in my article here, and I have a more comprehensive list of voxel editors here.