

Using AI in your "traditional" artworks

Started by parkdalegardener, January 03, 2023, 07:08:18 PM


parkdalegardener

Because I'm bored and have to wait around here for my lift to arrive, I'll sneak-peek something I started working on. I posted in the coffee shoppe a while back, when I was first exploring this stuff, complaining about coherency and how hard it was to attain anything passable. That was before depth mapping.

So the way this works is simple:

Use your video tool of choice to take your clip and break it into individual frames. I use the free ffmpeg. It doesn't matter what you use as long as you name the frames with sequential numbers and dump them all into a file folder somewhere.
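The frame-extraction step can also be scripted. Here is a minimal sketch that drives ffmpeg from Python; the video filename and output folder are placeholders, and `%06d` is ffmpeg's own pattern for sequential zero-padded numbering:

```python
import subprocess
from pathlib import Path

def frame_command(video: str, out_dir: str) -> list[str]:
    # %06d names the frames sequentially: frame_000001.png, frame_000002.png, ...
    return ["ffmpeg", "-i", video, f"{out_dir}/frame_%06d.png"]

def extract_frames(video: str, out_dir: str) -> None:
    """Dump every frame of `video` into `out_dir` as numbered PNGs."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(frame_command(video, out_dir), check=True)
```

Any tool works, as the post says; the only thing that matters is that the output names sort in frame order.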

Take a frame near the start of your video and upload it into the AI. Use the depth-mapping model and describe your expected output. Hit generate and examine the result. If the description needs adjusting to get the desired outcome, so be it: adjust the prompt and try again. When you are happy with the image you are generating, copy the seed as shown.
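The reason copying the seed works is that the "random" starting noise is deterministic: the same seed reproduces the same noise, so the same prompt plus the same seed regenerates essentially the same image across frames. A toy illustration with NumPy's generator standing in for the sampler's noise source:

```python
import numpy as np

# Two generators seeded identically produce identical "latent noise".
seed = 1234
noise_a = np.random.default_rng(seed).standard_normal((4, 4))
noise_b = np.random.default_rng(seed).standard_normal((4, 4))
assert (noise_a == noise_b).all()   # same seed -> same starting point

# A different seed gives a different starting point, hence a different image.
noise_c = np.random.default_rng(seed + 1).standard_normal((4, 4))
assert not (noise_a == noise_c).all()
```

This is why a fixed seed is the first step toward frame-to-frame coherency: only the input frame changes, not the noise.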

Illegitimus non carborundum
don't let the bastards grind you down

parkdalegardener

Outpainting is the cool thing. There are various implementations available to try online, but the basic idea is the same in all of them: you get the AI to imagine what your image would look like if the canvas were extended in a given direction. We will take our boy king with the bright blue eyes, drop him into inpainting/outpainting, and ask the AI to extend the canvas upward and fill in the top of our pharaoh's headpiece.

Et voilà, our young lad has had a top added to his headpiece. If you don't like it, generate another; changing the denoising will change the result.

Outpainting has no size limitations in theory. It most certainly does in practice. The method you choose for the outpainting, and the machine doing the actual work (CPU or GPU), will have RAM, VRAM, and other sizing limitations. If we sent our young contest entrant back through the outpainting, we could finish off each side of the headpiece and maybe bring the bottom down a few rows of pixels to get a more pleasing aspect ratio for the image.
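Mechanically, extending the canvas upward just means pasting the original image into a larger one and leaving the new rows fully transparent; the generator then fills in wherever the alpha channel says "nothing here yet". A rough sketch with NumPy, using (H, W, 4) RGBA arrays to stand in for the real images:

```python
import numpy as np

def extend_canvas_up(img: np.ndarray, extra_rows: int) -> np.ndarray:
    """img is an (H, W, 4) RGBA array; returns (H + extra_rows, W, 4)
    with the new top rows fully transparent (alpha == 0)."""
    h, w, _ = img.shape
    canvas = np.zeros((h + extra_rows, w, 4), dtype=img.dtype)
    canvas[extra_rows:] = img   # original pixels keep their place
    return canvas               # top rows stay alpha == 0 for the AI to fill

# A 64x64 fully opaque "pharaoh" extended upward by 16 rows:
pharaoh = np.full((64, 64, 4), 255, dtype=np.uint8)
extended = extend_canvas_up(pharaoh, 16)
```

The practical size limits the post mentions come in at the generation step, not here; making the canvas bigger is the cheap part.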

Remember: you can go through img2img, depth mapping, and in/outpainting over and over again, refining your composition and details with the help of your paint program as you go. You can use them in any order you like, or not at all. The great thing is that all of this works with your own images and designs. You can ask the AI for an image and accept that as the output, but in reality it is more a tool to help with the creativity of the person using it. You will get far more artistic satisfaction, and a better understanding of the technology's limitations, once you actually play around with it, using AI as a creative tool and part of your workflow instead of letting AI image generation be the end result in and of itself.

parkdalegardener

You can paint out the eyes in your paint program and upload the image, or upload a mask with the eyes removed to place over the original image. I simply painted vague holes in the original image itself, right in my web browser. Tell the AI you want bright blue eyes, or red bloodshot eyes, or green snake eyes; whatever you want. Hit the button and away you go. Hit it again and it will give you different bright blue eyes, in the case of our mini pharaoh.

parkdalegardener

Inpainting and outpainting are exactly what they sound like: painting inside an image, much like in any paint program, or painting outside the borders of your original image. The first is pretty self-explanatory: take your image, from whatever source, and paint adjustments into it via a transparency mask. This is the important part for both inpainting and outpainting: your paint program must paint nothing where you want transparency. That is one reason I use paint.net; transparent is just that, transparent. Some image editors/paint programs use black as a transparency colour. This won't work. Remember our depth masks from earlier? They are made of varying amounts of opacity.
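That distinction, transparent meaning "no pixel at all" rather than "black pixel", is easy to see in RGBA terms: the mask lives in the alpha channel, not the colour channels. A small sketch (the eye-region coordinates are made up for illustration):

```python
import numpy as np

# Start from a fully opaque RGBA image (alpha == 255 everywhere).
img = np.full((100, 100, 4), 255, dtype=np.uint8)

# Painting the "eye" region black only changes the colour channels;
# alpha stays 255, so an inpainter still sees a solid black pixel there.
img[40:50, 30:45, :3] = 0

# Zeroing the alpha channel is what actually makes the region a hole.
img[40:50, 30:45, 3] = 0
```

An editor that "uses black as transparency" is only doing the first assignment, which is exactly why it won't work as a mask.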

So here we go. Let's start with a young pharaoh for our Egyptian-themed contest entry. Here he is in glorious B&W. Now, to illustrate a point, we will give him striking baby-blue eyes; he is a baby after all. We will send our child-god image over to inpainting. It could just as easily be a Halloween photo I'm inpainting; many of the online generation sites will let you inpaint and outpaint with images of your own. They do not need to be AI-generated images.

parkdalegardener

I gave the AI almost carte blanche on that one. Usually I would keep much more control over the output by keeping the noise closer to zero and sending the results back over and over for refinement until I got something closer to my original design. Hence my lovely camels facing the other way.

img2img is just part of the whole pipeline to use in combination with inpainting and outpainting to achieve exactly the image you want. Those two things are up next.

sanbie

Wow, that's unbelievable pdg, just by doing that you got that image, wow, just wow  :allhail:

parkdalegardener

Let's just toss this into img2img and tell the AI to do its thing. I hit it pretty hard. See that Denoising strength slider? That's the "do your own thing" slider. The closer to 0 it is, the more the AI has to listen to what I say; in fact, at 0 the image would not change at all. If I set the slider to 1, I'm saying get as creative as you want. 0.95 is telling the AI I want a lot of creativity in its interpretation of my lovely camels.
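As a back-of-the-envelope model of that slider: at strength 0 the AI keeps your image untouched, and at 1 it starts from pure noise and ignores your pixels entirely. A toy blend makes the endpoints concrete (real samplers work in latent space over many denoising steps, so this is only an analogy):

```python
import numpy as np

def toy_img2img(image: np.ndarray, strength: float, seed: int = 0) -> np.ndarray:
    """Blend the input toward random noise; a crude stand-in for how
    denoising strength trades your pixels for the AI's 'creativity'."""
    noise = np.random.default_rng(seed).uniform(0, 255, image.shape)
    return (1 - strength) * image + strength * noise

camels = np.full((8, 8, 3), 128.0)            # flat grey "painting"
unchanged = toy_img2img(camels, 0.0)          # strength 0 -> image untouched
mostly_ai = toy_img2img(camels, 0.95)         # strength 0.95 -> mostly noise
```

At 0.95 only 5% of the original survives the blend, which matches the post's point: the rough camel painting is little more than a compositional hint.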

Well damn; this came out better on the first pass than I thought. The camels are facing the wrong way, but the rest followed along fairly well. I can live with it. At this point, if you wish, you can add a depth map from a completely different composition, such as a figure posed in a 3D program with a depth map saved from the project. Remember, it doesn't matter what sex, age, or clothing your figure had; we've seen that we can change those at will.

parkdalegardener

Img2img can add detail to an image via depth maps, but it is way more powerful than that. Time to break out paint.net.

I'm going to start another Egyptian-themed contest entry. How about a nice caravan of camels crossing the desert? Maybe a few pyramids in the background, and a hot, cloudless desert sky. Sounds good, but I don't have a camel figure in my Poser runtime and I'm too slack to model and texture a pyramid in Blender. Why bother? I don't have a camel after all. I do have paint.net. OK, screw it: paint the top half of a square blue for sky, the bottom half brown for sand, and two yellow triangles for the pyramids. This is going fine; it took less time to paint the background than to type the process. Now for the camels. Three black camels. Beautiful darlings, aren't they?
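That blocked-out background really is all the AI needs. For the record, the same crude layout can be produced programmatically; the colour values here are arbitrary stand-ins for "blue", "brown", "yellow", and "black":

```python
import numpy as np

H, W = 128, 256
img = np.zeros((H, W, 3), dtype=np.uint8)
img[: H // 2] = (90, 150, 230)     # top half: blue sky
img[H // 2 :] = (180, 140, 90)     # bottom half: brown sand

def pyramid(img: np.ndarray, apex_row: int, apex_col: int,
            height: int, colour: tuple) -> None:
    # Filled triangle: widen by one pixel per row below the apex.
    for i in range(height):
        img[apex_row + i, apex_col - i : apex_col + i + 1] = colour

pyramid(img, H // 2 - 40, 80, 40, (220, 200, 80))    # two yellow pyramids
pyramid(img, H // 2 - 25, 170, 25, (220, 200, 80))
img[H // 2 + 20 : H // 2 + 30, 40:56] = (20, 20, 20) # one blocky black "camel"
```

Fed through img2img at high denoising strength, a blob sketch like this is enough for the model to invent the rest.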

parkdalegardener

As you may have noticed by now, Stable Diffusion has absolutely no idea about Egyptian deities. Horus, Ra, Seth, and so on are all just pharaohs. The mask still needs a bit of work before it's ready for prime time, but I can paint a hawk god in place of the statues supplied, in my paint program.

If I was starting with a photograph I took on a backpacking trip through Africa as a teenager, and wanted to use it as the background for a separate composition for a contest entry, then the masking to get rid of the foreground would be even more time-consuming, as you have no depth map to start with.

The MiDaS model was trained for autonomous driving, after all; that's the secret. It takes in a 2D colour image from a camera and, as quickly as possible, separates everything it sees from its surroundings, then determines how close each object is to becoming a bumper smear and raising your insurance rates.

So MiDaS becomes almost magical for artists. When we make a mask in a paint program with magic wands and bucket fills, even zooming way in to see each pixel, we still leave artifacts along the edge of the mask. MiDaS uses AI to automatically generate more accurate masks than we could by hand, from simple descriptions of what we want masked. We just didn't see the mask earlier, as displaying it would slow everything down.
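Once a depth map comes back, foreground/background masking reduces to thresholding it: everything nearer than some cut-off is "subject", everything farther is "background". A toy version with a synthetic depth map (a real one would come from MiDaS, whose output convention and scale differ from the plain distances assumed here):

```python
import numpy as np

def depth_to_mask(depth: np.ndarray, near_cutoff: float) -> np.ndarray:
    """depth holds per-pixel distance estimates; return a boolean mask
    selecting everything closer than `near_cutoff` (the foreground)."""
    return depth < near_cutoff

# Synthetic scene: mostly far away (depth 10), one near 2x2 object (depth 2).
depth = np.full((6, 6), 10.0)
depth[2:4, 2:4] = 2.0
mask = depth_to_mask(depth, near_cutoff=5.0)
```

Because the threshold is applied per pixel on the model's estimate, the edge follows the object exactly; there are no magic-wand artifacts to clean up.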
