We've been working on expanding Womp Spark beyond its role as an assistant. We wanted to remove the friction between having an idea and having a 3D-printable object. That meant adding image and 3D mesh generation directly into the chat interface.
Shipping AI at Womp was no easy win. For us, AI isn't a gimmick or an optimization. It's about making 3D creation accessible to people who never thought they could create in 3D — but also not leaving behind creators who want depth, not black-box magic.
The workflow was always the problem. Users would come to Spark with a concept, but to turn that concept into something physical, they had to leave the conversation, find another tool, generate content somewhere else, then figure out how to get it back into their scene. Each step added friction. Each step made it less likely they'd complete the loop from idea to object.
So we built generation directly into Spark. Now you can ask Spark to create an image, then immediately ask it to convert that image to a 3D mesh, then send that mesh to print — all in the same conversation. The content lives in your chat history, and you can iterate on it just like any other part of your design process.
Our goal wasn't to replace the tools power users love, but to add a new path into 3D for everyone. Generate objects instantly, then use Womp's editor to compose scenes, customize them, and make them your own. Some creators were frustrated with our first AI release — and we're listening. Womp's mission is easy 3D for all, not shortcuts that limit control.
Spark can now generate images using several different models, depending on what you need. For most tasks, you don't need to think about which model to use — Spark handles that. But if you want to optimize for speed or quality, the option is there.
Free users get access to Flux Schnell (faster generation) and Flux Dev (better quality), plus Qwen Image Edit Turbo for context-aware editing. Pro users can also use Flux Pro, Nano Banana, GPT Image, and Flux Kontext Pro for more specialized use cases.
The models aren't ours — we're using off-the-shelf generation APIs. Your prompts and images aren't used to train anything. That was important to us from the start.
3D generation is where things get more interesting. You can generate 3D models from text descriptions, or you can convert any image into a 3D mesh. The key requirement is that everything Spark generates needs to be actually printable — not just theoretically, but optimized for SLA printing from the first generation.
Free users get access to Trellis3D, which handles most use cases well. Pro users can use Hunyuan3D-2, 2.5, 3 Turbo, or 3 depending on whether they want higher quality or faster output.
The tricky part isn't the generation itself — it's making sure what comes out the other end will actually print. Since the underlying models are off-the-shelf, the work on our side is making sure every generated mesh meets Womp's printing requirements. A 3D model that looks good in a preview but won't print isn't helpful.
Every generated image or 3D mesh comes with a set of action buttons right in the chat. For images, you can make 3D, upscale, remove background, or import to your scene. For 3D meshes, you can add to scene, send to 3D print, view the mesh, or download it.
The goal isn't to replace typing entirely. Sometimes you'll still want to describe what you want in natural language. But for common operations — "take this image and make it 3D" — a button is faster and clearer than typing out another instruction.
Generation costs money to run, so we built a credit system. Free users get 300 credits per day. Pro users get 12,000 credits per month. If you need more, you can buy additional credits.
One credit equals $0.001, so the math is straightforward. A typical image generation costs a few credits. A 3D mesh generation costs more, usually 20-50 credits depending on the model and quality level.
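To make those numbers concrete, here's a small sketch of the credit math. The conversion rate and the per-generation ranges are the figures above; the per-image and per-mesh constants are illustrative midpoints, and none of this is part of Womp's API.

```typescript
// Illustrative credit math only — not part of Womp's API.
const USD_PER_CREDIT = 0.001; // 1 credit = $0.001

// Rough per-generation costs, assumed from the ranges above.
const IMAGE_CREDITS = 5;  // "a few credits" per image
const MESH_CREDITS = 35;  // midpoint of the 20-50 credit range for a mesh

function creditsToUsd(credits: number): number {
  return credits * USD_PER_CREDIT;
}

// Example: one full idea-to-print loop — generate an image, then convert it to a mesh.
const loopCredits = IMAGE_CREDITS + MESH_CREDITS;   // 40 credits
console.log(creditsToUsd(loopCredits));             // ≈ $0.04 per loop

// Under these assumptions, 300 daily free credits cover about 7 such loops,
// and 12,000 monthly Pro credits cover about 300.
console.log(Math.floor(300 / loopCredits), Math.floor(12000 / loopCredits));
```

In other words, under these assumptions the free tier covers a handful of complete idea-to-print loops per day, and Pro covers a few hundred per month.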
All of this lives in the same chat interface. Your conversation history, your questions, your generated images and 3D meshes — they're all in one thread. You can go back and reference earlier generations. You can iterate on previous outputs.
Spark also works alongside your active scene. It appears as a tab in your right sidebar when a scene is open, and you can generate content even while a scene is loading or paused. The goal was to make generation feel like a natural part of the design process, not like a separate tool you have to context-switch to.
The workflow problem isn't unique to Womp. Most 3D design tools require you to be an expert in both 3D modeling and whatever tool you're using. The learning curve is steep, and the tools are complex. We've been trying to lower that barrier, but generation in Spark is the first time we've had a real shortcut.
The recommended workflow is simple:

1. Describe what you want and generate an image.
2. Iterate on the image until it looks right.
3. Convert the image to a 3D mesh.
4. Send the mesh to print, or add it to your scene.
You can go straight from text to 3D if you want, but image-to-3D tends to give more predictable results. It's also easier to iterate on a 2D image than on a 3D mesh if something doesn't look right.
This update isn't the end point. It's a step toward making 3D creation more accessible. We're building AI tools that will help users create 3D content without needing to master complex tools. The complexity is still there — we're just hiding it behind a more intuitive interface.
The goal isn't to make 3D design easy. It's to make it possible for more people to participate. Generation in Spark is one piece of that. There's more coming.