This started with a text prompt, something like "a claw machine with dogecoins and Mr. Meeseeks dolls." The prompt goes to a neural network that generates crude reference designs: a combination of technologies converts words into images, simplifying the initial design phase. Note that this process accesses high-end graphics cards in the cloud to render the images. It is repeated a few times with varied starting parameters to produce different final results. The designs are then handed to a person, who refines them into a 3D model. With a model in hand, real-world versions can be fabricated using additive and subtractive methods: a 3D printer or a CNC router can turn out components quickly. The point is that AI is useful not only downstream, in carefully constructed scenarios, but also upstream as ready-to-use tooling.
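The generate-and-vary loop described above can be sketched in a few lines. This is a minimal illustration only: `generate_design` is a hypothetical stand-in for the cloud text-to-image call, not a real API, and the seed is the "varied starting parameter" that produces different results from the same prompt.

```python
import random

# Hypothetical stand-in for the cloud text-to-image rendering call.
# The real service, model, and parameters are assumptions for illustration.
def generate_design(prompt: str, seed: int) -> dict:
    """Pretend to render one crude reference design for `prompt` using `seed`."""
    random.seed(seed)
    return {"prompt": prompt, "seed": seed, "style_score": random.random()}

def generate_variants(prompt: str, n: int = 4) -> list:
    """Repeat generation with varied starting parameters (here, seeds)
    so the same prompt yields several different candidate designs."""
    return [generate_design(prompt, seed) for seed in range(n)]

variants = generate_variants("a claw machine with dogecoins and Mr. Meeseeks dolls")
print(len(variants))  # one candidate reference design per seed
```

A human then picks the most promising candidates from `variants` and refines them into a 3D model, which is the hand-off point between the AI tooling and the fabrication step.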
See more 3D models from Daniel Ferro.