Blender can now generate visuals and effects based on text descriptions

AI art generators are coming to 3D modelling apps as well. Stability AI has released Stability for Blender, an add-on that brings Stable Diffusion’s image-generating technology to the open-source 3D program. You can create AI-based textures, effects, and animations using either your existing renders as source material or just a text description. You may no longer need to be (or hire) a skilled 2D artist to put the finishing touches on a project.

Stability for Blender is free to use, although it requires an API (application programming interface) key and an internet connection. It doesn’t demand any extra software or a dedicated GPU, which could be handy if you need to finish texture or video work on a laptop that isn’t as powerful as your primary workstation.
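The add-on itself is driven from Blender’s interface, but the API key points to the kind of request it makes against Stability AI’s hosted service. As a rough sketch of what such a call involves (this assumes Stability’s public v1 REST endpoint and uses a placeholder engine ID and prompt; it is not the add-on’s actual code), a text-to-image request from Python might look like this:

```python
import base64
import os

import requests

# Assumption: key stored in an environment variable, obtained from your
# Stability AI account. The engine ID below is illustrative; check which
# engines your account exposes.
API_KEY = os.environ["STABILITY_API_KEY"]
ENGINE = "stable-diffusion-v1-5"

response = requests.post(
    f"https://api.stability.ai/v1/generation/{ENGINE}/text-to-image",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
        "Accept": "application/json",
    },
    json={
        # A single text prompt; the service also supports weighted prompts.
        "text_prompts": [{"text": "weathered bronze texture, seamless"}],
        "width": 512,
        "height": 512,
        "samples": 1,
    },
    timeout=120,
)
response.raise_for_status()

# Each returned artifact is a base64-encoded PNG; decode and save the first.
artifact = response.json()["artifacts"][0]
with open("texture.png", "wb") as f:
    f.write(base64.b64decode(artifact["base64"]))
```

Because the generation happens server-side, the local machine only uploads a prompt (or source image) and downloads the result, which is why no dedicated GPU is needed.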

The feature could save time and money while streamlining your workflow. According to Stability AI, it may also help you create highly personalised content. It’s fair to say that if you already plan to use AI-generated art, this could spare you from jumping between apps and services.

This is unlikely to give Stable Diffusion a significant edge over competitors such as OpenAI’s DALL-E, and it won’t generate 3D objects from scratch; for that, you’ll need a tool like Point-E. But it does suggest a way AI image generation could assist creatives while reducing the risk of copyright disputes. As Stability for Blender can rely on your own work as source material, you shouldn’t have to worry as much about legal issues.