This week, I used a depth estimation model and referenced example code to assist with my own coding. I aimed to build on that code by adding a color scheme to the depth estimation to create a unique color effect: I applied a sunset palette so that the colors vary with depth.
Compared to ml5, the code took quite a long time to process, causing significant lag whenever I tried to edit it or view its effects. The logic is also quite different, and to be honest, I'm still working to understand all of it. Personally, I prefer using ml5 with p5.js because it runs much more smoothly and I understand its logic more clearly. However, Transformers.js offers more possibilities for experimenting with text and generative tasks, such as text generation and speech-to-text. That said, I believe ml5 has greater potential and is more effective for face and body detection, while Transformers.js excels more at recognition tasks.
I built upon this example code: https://editor.p5js.org/ima_ml/sketches/tGyF87f59
I normalized the depth values by dividing by 255, bringing them into the 0–1 range to make the color mapping easier.
let normalizedDepth = depthValue / 255;
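For context, here is a minimal sketch of where that depthValue could come from, assuming the model's output has been drawn into a p5.Image I'll call depthMap (that variable name is my own, not from the example code):

// Read the depth map pixel by pixel; the red channel holds the 0–255 depth.
depthMap.loadPixels();
for (let y = 0; y < depthMap.height; y++) {
  for (let x = 0; x < depthMap.width; x++) {
    let i = 4 * (y * depthMap.width + x); // RGBA stride in the pixels array
    let depthValue = depthMap.pixels[i]; // red channel as the depth reading
    let normalizedDepth = depthValue / 255; // now in the 0–1 range
    // ...the color mapping shown below happens here for each pixel
  }
}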
I then used the p5.js function lerpColor() to map colors to the depth values. I had some trouble figuring out the steps for correctly mapping the colors, so I asked ChatGPT for help. 😢
let colorMapped;
// Sunset gradient: purple → hot pink → light pink → orange → pale yellow,
// with each branch covering a quarter of the normalized 0–1 depth range.
if (normalizedDepth < 0.25) {
  colorMapped = lerpColor(color(128, 0, 128), color(255, 0, 128), normalizedDepth / 0.25);
} else if (normalizedDepth < 0.5) {
  colorMapped = lerpColor(color(255, 0, 128), color(255, 102, 178), (normalizedDepth - 0.25) / 0.25);
} else if (normalizedDepth < 0.75) {
  colorMapped = lerpColor(color(255, 102, 178), color(255, 153, 51), (normalizedDepth - 0.5) / 0.25);
} else {
  colorMapped = lerpColor(color(255, 153, 51), color(255, 223, 128), (normalizedDepth - 0.75) / 0.25);
}
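Each branch covers a quarter of the normalized range, and dividing by 0.25 (for example, (normalizedDepth - 0.5) / 0.25) rescales that quarter back to 0–1 so lerpColor() blends smoothly within the segment. As a worked example, a depth value of 200 gives normalizedDepth ≈ 0.78, which lands in the last branch with a blend amount of about 0.14, so the pixel comes out as orange nudged slightly toward pale yellow.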
Because the image I upload is larger than the canvas, I scaled the image to fit within the canvas.
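A minimal sketch of that resize step, assuming the uploaded picture is stored in a p5.Image variable I'll call img:

// Shrink the image so it fits inside the canvas while keeping its aspect ratio.
let scaleFactor = min(width / img.width, height / img.height);
let w = img.width * scaleFactor;
let h = img.height * scaleFactor;
image(img, (width - w) / 2, (height - h) / 2, w, h); // draw centered on the canvas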
Here is the final result:
My code: