Projection Mapping with Eden
Nov 12
4 min read
A new Eden user (@sstillwell) asked how they might approach a projection mapping project with Eden tools. This is something I've done several times for both personal and commercial projects, so I wanted to take some time to outline my approach, along with tips, tricks, and best practices for working with these tools toward tightly coherent projection mapping content.
They sent a few images of sculptures they've mapped that we can use to demonstrate.
The first thing I did was clean up the images in Photoshop to prepare them for Stable Diffusion workflows, removing obstructions and isolating the sculptures.
Since SDXL (our best tool for using controlnet guidance) likes images that are about one megapixel in size, I resized the images to 1024x1024 for square images and 1344x768 for 16:9 images. I also used Lightroom to correct the perspective of the images, getting them as close to straight-on as possible to provide a nice clean map for guidance.
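If you'd rather script the resize step than do it by hand, here's a minimal Pillow sketch. It assumes you've already cropped to the target aspect ratio in Photoshop or Lightroom; the file names and folder layout are just placeholders.

```python
# Sketch: resize cleaned-up sculpture photos to SDXL-friendly sizes (~1 megapixel).
# Assumes the image is already cropped to the right aspect ratio; paths are placeholders.
from PIL import Image

TARGETS = {"square": (1024, 1024), "wide": (1344, 768)}

def prep_for_sdxl(path: str, layout: str = "square") -> Image.Image:
    img = Image.open(path).convert("RGB")
    return img.resize(TARGETS[layout], Image.LANCZOS)

prep_for_sdxl("sculpture_front.jpg", "square").save("sculpture_front_1024.png")
```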
For projection mapping, it's often useful to create masks for our images so that any "coloring outside the lines" by the AI is ignored when you go to map your surface. This is a great time to do that as well, while you're already working in an outside tool like the Adobe suite.
Since the second sculpture has multiple parts, I also separated the central triangle sculpture to make high-res content for just that piece of the mapping. I added 50% grey to the backgrounds to have a neutral starting point for SD denoising.
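The grey-background compositing can also be scripted. Here's a rough Pillow sketch, assuming you've exported the isolated sculpture as a PNG with alpha (file names are placeholders); as a bonus, the alpha channel doubles as the projection mask for the later steps.

```python
# Sketch: composite an isolated sculpture (PNG with alpha) onto a 50% grey background,
# giving SD denoising a neutral starting point. File names are placeholders.
from PIL import Image

def neutral_background(cutout_path: str, out_path: str) -> None:
    cutout = Image.open(cutout_path).convert("RGBA")
    grey = Image.new("RGBA", cutout.size, (128, 128, 128, 255))  # 50% grey
    Image.alpha_composite(grey, cutout).convert("RGB").save(out_path)

    # Save the alpha channel as a black-and-white mask for later masking steps.
    cutout.split()[3].save(out_path.replace(".png", "_mask.png"))

neutral_background("triangle_cutout.png", "triangle_grey.png")
```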
Now we're ready to start using these images as guidance for Eden tools, and the experimentation can begin!
My first step here is the fun part: playing with the tools using these guides as inputs. Pure img2img workflows begin with uploading your projection map as a starting image and turning down the denoise parameter, called "Generation Strength" on Eden.
For SDXL image generation, a guidance strength of 0.5–0.75 will retain a lot of the input, but the more AI influence you allow, the more likely it is that the image won't maintain the exact shape. Flux generations via our Create an Image (Advanced) tool like a little more guidance; 0.7–0.85 are great starting points to try.
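For readers who want to reproduce the img2img step offline, here's a rough sketch with Hugging Face diffusers. This is my assumption about a comparable local workflow, not what Eden runs server-side; in diffusers the denoise knob is called `strength`, where lower values preserve more of the input image, and the model ID and prompt are just illustrative.

```python
# Sketch: SDXL img2img with a moderate denoise strength, so the output stays
# close to the projection-map input. Requires a CUDA GPU plus the `diffusers`
# and `torch` packages; prompt and file names are illustrative.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("sculpture_front_1024.png")  # the prepped guidance image
result = pipe(
    prompt="glowing bioluminescent coral growing over a concrete sculpture",
    image=init_image,
    strength=0.5,        # lower = closer to the input, higher = more AI influence
    guidance_scale=7.0,
).images[0]
result.save("sculpture_img2img.png")
```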
Beyond pure img2img, you can also use the input image as guidance via "controlnet", a framework that applies a preprocessing step to the input and steers the output towards that shape. "Canny" is an edge-detection model, "depth" creates a luminance depth map the output will cohere to, and "lineart" is a thicker, more forgiving line detector than the canny model.
! Tip: Clicking on the numbered box on your creation will expand the parameters used to make it, and reveal any intermediate outputs like controlnet preprocessing guides.
Experiment with different controlnet passes to create images that closely adhere to the input guidance, resulting in compositions that will tightly map to your projection surface.
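As a local analogue of the controlnet options above, here's a hedged sketch of a Canny-guided SDXL generation with diffusers. The model IDs, edge-detection thresholds, and conditioning scale are reasonable public defaults, not Eden's internal settings.

```python
# Sketch: SDXL + Canny ControlNet, steering the output toward the edges of the
# projection map. Model IDs, thresholds, and conditioning scale are illustrative.
import cv2
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Canny preprocessing: detect edges, then stack to a 3-channel guide image.
guide = np.array(load_image("sculpture_front_1024.png"))
edges = cv2.Canny(guide, 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

out = pipe(
    prompt="stained glass mosaic, volumetric light",
    image=canny_image,
    controlnet_conditioning_scale=0.8,  # how strongly the edges steer the output
).images[0]
out.save("sculpture_canny.png")
```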
From still images alone, you can get a lot of nice content. Experiment with the collection of controlnets and strength parameters, try out different prompts, and curate your best results into a collection. You can even upload some of these outputs to the Remove Background tool to get a nice isolated PNG with alpha. Mileage will vary depending on the image you use, but it's a great time saver when it works.
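Eden's Remove Background tool handles this step in the browser; if you'd rather batch it locally, the open-source rembg library (my assumption here, not necessarily what Eden uses under the hood) does something similar.

```python
# Sketch: batch background removal with rembg, producing PNGs with alpha.
# Results vary by image, just like the hosted tool; folder names are placeholders.
from pathlib import Path
from PIL import Image
from rembg import remove

for path in Path("curated_outputs").glob("*.png"):
    cutout = remove(Image.open(path))          # returns an RGBA PIL image
    cutout.save(path.with_name(path.stem + "_alpha.png"))
```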
Running the images with background removed through the "Animate an Image" tool can add motion to your still images, and the removed background will help it cohere tightly to the shape of your projection sculpture.
You can also use multiple still images with the frame-interpolation endpoint to interpolate between your favorite creations. Applying the mask image to the video output will make sure that even if the AI creation colors outside the lines, your shape will tightly cohere to your projection map.
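Applying the mask to a rendered video can also be scripted with OpenCV if you'd rather not do it in an editor. A rough sketch is below; it assumes a black-and-white mask like the one saved during preprocessing, and the codec and file names are placeholders.

```python
# Sketch: apply the projection mask to every frame of a generated video,
# blacking out anything the AI painted outside the sculpture's silhouette.
import cv2

mask = cv2.imread("triangle_grey_mask.png", cv2.IMREAD_GRAYSCALE)
cap = cv2.VideoCapture("interpolation_output.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

mask = cv2.resize(mask, (w, h))
writer = cv2.VideoWriter("interpolation_masked.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
while True:
    ok, frame = cap.read()
    if not ok:
        break
    writer.write(cv2.bitwise_and(frame, frame, mask=mask))
cap.release()
writer.release()
```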
Sending the masked image to the Runway Gen3a Turbo tool will be hit or miss with coherence to the image shape, but masking the output with the input mask can still produce some interesting results. Here are the raw and masked outputs of a Gen3 creation:
Finally, the TextureFlow endpoint is probably our strongest tool for making video content that adheres closely to an input guidance shape. This tool takes a shape input (controlnet), which could be your original image, a mask, or even a still output from one of our other text-to-image creation tools, and applies several style images that it interpolates through, guided by a choice of motion mask shapes.
If you apply the mask created in our preprocessing step at the beginning, you'll have very nice motion content that is ready to go for your projection mapping project.
I've curated an Eden collection of creations that I made while assembling this guide; feel free to explore the parameters and click "Use As Preset" in the ... menu to remix them with your own input guidance images.
Assets made along the way for this demonstration are also here if you'd like to experiment with them yourself.
As always, these tools are available to play with at Eden.art, and our team and other users are quick to respond with questions or help on our Discord server.
Have fun, and happy creating!