The Gathering!




Halloween Gift Page

Halloween Gathering has begun! You can read all about it HERE.

Participants:
in my little paws

Petege
Aelin
Llola Lane
emanuela1
Hipshot
prae
Nemesis
BoReddington
EarwenM
panthia
sanbie
GlassyLady
shadow_dancer
Disparate Dreamer



 :ghost: :ghost: :ghost:

Chat Box

Halloween is coming!

McGrandpa

2025-10-10, 01:04:27
Hey Zeus FX, welcome back! Great job to Dark Angel, she swatted the heck outta some gremlins! :peek: :Hi5: :woohoo:

Zeus Fx

2025-10-09, 13:07:22
Hello everyone. It is good to be back

Hipshot

2025-10-02, 08:51:51
 :gday: Sounds like the gremlins have once again broken loose.   Think we need to open the industrial microwaves.   :peek:

Skhilled

2025-10-01, 18:54:22
Okey, dokey. You know how to find me, if you need me.  :gday:

DarkAngel

2025-10-01, 17:18:59
nopers just lost a bit

Skhilled

2025-09-30, 20:07:14
DA, Are you still locked out?

DarkAngel

2025-09-29, 15:34:23
Hope site behaves for a bit.

McGrandpa

2025-09-29, 14:04:22
Don't sound so good, Mary!

McGrandpa

2025-09-29, 14:03:44
My EYES!  My EYES!  Light BRIGHT Light BRIGHT!

DarkAngel

2025-09-27, 17:10:12
I locked me out of admin it would seem lol

Vote for our site! 2025

Vote for our site daily by CLICKING this image:




Then go here to post your vote.


Awards are emailed when goals are reached:

Platinum= 10,000 votes
Gold= 5,000 votes
Silver= 2,500 votes
Bronze= 1,000 votes
Pewter= 300 votes
Copper= 100 Votes




2025 awards


2024 awards
   

Attic Donations

Current thread located within.


All donations are greatly needed and appreciated, and go toward the Attic/Realms server fees and upkeep.


Thank you so much.

@ FRM




Stats
  • Total Posts: 96,709
  • Total Topics: 10,120
  • Online today: 1,194
  • Online ever: 5,532 (March 10, 2025, 02:26:56 AM)
Users Online

Giveaway of the Day

Using AI in your "traditional" artworks

Started by parkdalegardener, January 03, 2023, 07:08:18 PM



parkdalegardener

Pretty straightforward. Your image, and a greyscale copy that shows how close to, or far away from, the viewer a thing is. The big statue is close to us in the foreground, another big statue is in the midground, and some pyramids are in the background.

If we take our depth map and darken up the contrast in paint.net (or whatever your paint program/image editor of choice may be) and paint a little black where we want everything to be left alone, we can mask out our two statues from the background.
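The darken-and-mask step can also be sketched in code. Below is a minimal pure-Python sketch; the function name, the sample values, and the bright-means-near convention are my own illustrative assumptions, not anything specific to paint.net. It boosts the contrast of a small grayscale depth map and then thresholds it into a black-and-white mask.

```python
def depth_to_mask(depth, cutoff=128, contrast=2.0):
    """Boost contrast around mid-grey, then threshold into a 0/255 mask.

    `depth` is a 2D list of grayscale values (0-255); bright pixels are
    assumed to be near the camera, as in a typical inverse-depth map.
    """
    mask = []
    for row in depth:
        out_row = []
        for v in row:
            # Stretch values away from mid-grey to darken the contrast.
            stretched = (v - 128) * contrast + 128
            stretched = max(0, min(255, stretched))
            # Near (bright) pixels become white in the mask; far become black.
            out_row.append(255 if stretched >= cutoff else 0)
        mask.append(out_row)
    return mask

# A tiny 2x3 depth map: bright "statue" pixels against a dark background.
depth = [[200, 40, 30],
         [220, 50, 20]]
print(depth_to_mask(depth))  # -> [[255, 0, 0], [255, 0, 0]]
```

A real editor does the same thing with smooth curves and a brush instead of a hard cutoff, but the principle is identical.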
Illegitimus non carborundum
don't let the bastards grind you down

parkdalegardener

Let me try this.

I want to make a picture in my 3D software for the current contest: a couple of statues of the god Horus guarding a path to some pyramids in the distance. One of the options when I render my image is to render a depth map as well. This is true of Blender, Poser, and, I am pretty sure, DS. If I rendered my picture and the depth map and put them side by side, they would look like this.
Illegitimus non carborundum
don't let the bastards grind you down

parkdalegardener

For folk that use a 3D program to create art, there is a depth map render pass that we are all familiar with, though not all of us actually have a use for it. For people that have 2D art like a photograph, or a render of 3D artwork with no such map for a particular scene, MiDaS can generate one.

Stable Diffusion, DALL-E, Midjourney, Wombo, or what have you, all have two parts. The first part works by turning words into tokens that represent those words; the second part then combines the tokens, hopefully into something resembling what the user requested. Results are not guaranteed by any stretch of the imagination. That's why it's hard to generate similar images across online services: each service massages the data behind the scenes. The results from Disco Diffusion look different than Midjourney's, and so on, even though they do the same thing using the same basic model to start with.
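The "words into tokens" stage can be illustrated with a toy lookup table. Real generators use learned subword tokenizers with tens of thousands of pieces (Stable Diffusion uses CLIP's); the tiny vocabulary below is invented purely to show the idea.

```python
# A toy vocabulary; real tokenizers learn tens of thousands of subword pieces.
VOCAB = {"<unk>": 0, "a": 1, "warrior": 2, "fox": 3, "cougar": 4, "bard": 5}

def tokenize(prompt):
    """Map each lowercase word to an integer token id, 0 for unknown words."""
    return [VOCAB.get(word, VOCAB["<unk>"]) for word in prompt.lower().split()]

print(tokenize("A warrior fox"))  # -> [1, 2, 3]
print(tokenize("A purple fox"))   # "purple" is out of vocabulary -> [1, 0, 3]
```

The second stage, turning those ids back into pixels, is the learned diffusion model itself, and it is where each service's behind-the-scenes massaging makes the results diverge.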

Some of the online generation services allow you to choose from the original model files (Stable Diffusion by Stability AI and partners) and others. I used a MiDaS model to illustrate how powerful depth mapping with this model is: I changed the entire character from cat to dog, and the outfit from one type to another, as easily as shown. Yes, image editors and paint programs can do it, but with great effort and skill. You can use auto-masking in Photoshop and other programs; it's exactly the same type of thing, but this way you are not training Adobe's AI to get smarter simply by using their software.

In fact, Adobe now sells an AI plugin that is a Stable Diffusion model. There are also free plugins to use Stable Diffusion in Blender or in the open-source Krita. All three can use the MiDaS model. It's kind of auto-magical: I didn't have to generate a mask for anything, the clothing change for instance. I could have left the animal unchanged and only the clothing recoloured.

All this is getting more confusing, no doubt, but it does lead somewhere. I'll have to explain depth mapping with different images. I'm going to get my act together here and be back in a bit.
Illegitimus non carborundum
don't let the bastards grind you down

sanbie

Boy that just went right over my head lmao
Will have to read it at a later date in the hope it goes into this old brain lol

parkdalegardener

Couldn't add the second image for some reason
Illegitimus non carborundum
don't let the bastards grind you down

parkdalegardener

So; we've seen we can use AI for making textures and the normals for them.

We can make any environment, though getting perfect environmental lighting is more specific to the software you want to use the environment in.

We can make any background plate we need.

Not really anything you couldn't do in a good paint program. That's the point: AI is simply a tool, just like Poser, or photography, or paint.net.

So what else can we do with it? Most of us are familiar with depth maps, z-depth mapping, or f-stops in photography. 3D artists have an option to deal with them in their software. Usually they are used to blur out a background or soften near foreground objects. They can even be used to mask out unwanted parts of an image.
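As a hedged sketch of that blur use, here is one crude per-pixel depth-of-field model in plain Python (my own simplification, not how any particular renderer implements it): the further a pixel's depth is from the focal depth, the more of a pre-blurred copy of the image shows through.

```python
def dof_pixel(sharp, blurred, depth, focus, falloff=1.0):
    """Blend one pixel: the further `depth` is from `focus`, the more of
    the pre-blurred image shows through. All values are illustrative floats."""
    # 0 = perfectly in focus, 1 = fully blurred.
    weight = min(1.0, abs(depth - focus) * falloff)
    return sharp * (1.0 - weight) + blurred * weight

# A pixel at the focal plane stays sharp; a distant one goes fully blurry.
print(dof_pixel(sharp=0.9, blurred=0.4, depth=0.5, focus=0.5))  # -> 0.9
print(dof_pixel(sharp=0.9, blurred=0.4, depth=2.0, focus=0.5))  # -> 0.4
```

Running this over every pixel with a real depth map is exactly the "blur out the background" trick; swapping the blend for a hard cutoff gives you the masking trick instead.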

2D artists have had no access to such maps, and after the creation of an image, neither do 3D artists.

Enter MiDaS. MiDaS is an AI model that estimates monocular depth. Huh? It guesses how close or far away something is from the camera. It was developed for autonomous driving; we will use it a little differently, to estimate the depth in a 2D image.
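MiDaS's raw output is relative inverse depth (a bigger number means closer to the camera), so turning a prediction into a grayscale map like the ones in this thread is essentially a min-max normalization. A small sketch on a toy prediction (the numbers are made up for illustration):

```python
def to_grayscale(pred):
    """Min-max normalize a 2D list of raw depth predictions to 0-255."""
    flat = [v for row in pred for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0  # avoid divide-by-zero on a flat prediction
    return [[round((v - lo) / span * 255) for v in row] for row in pred]

# Toy inverse-depth prediction: 8.0 is nearest, 2.0 farthest.
print(to_grayscale([[8.0, 2.0], [5.0, 2.0]]))  # -> [[255, 0], [128, 0]]
```

This is why MiDaS maps show near objects bright and far ones dark, matching the statue/pyramid example earlier in the thread.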

For instance, I can change out a character at the last second without changing my composition. I generated a portrait of a cougar bard for a game character. At the last second the character wanted to be a fox warrior instead. I'll show you the generation of the bard and then the change to warrior.

The bard image does not have to come from Stable Diffusion; I just show the generation for clarity on the whole thing. In the upper left of the screen you see I used the 2.1 768 model to generate the original image. I then sent my cat picture to the image-to-image (img2img) tab.

In the second screen grab you can see I have changed to the MiDaS depth-mapping model. It has a smaller base size of 512x512, in comparison to the larger 768x768 model used to generate our cat picture.
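When hopping between models with different base sizes like this, the input is typically resized so its shorter side matches the model's base resolution, with both sides kept divisible by 8 (latent diffusion works on an 8x-downscaled latent, so dimensions must divide by 8; many UIs round to 64). A sketch of that arithmetic, with the helper name my own invention:

```python
def fit_to_base(width, height, base=512, multiple=8):
    """Scale so the shorter side equals `base`, then snap both sides down
    to a multiple of 8 (latent diffusion needs dims divisible by 8)."""
    scale = base / min(width, height)
    new_w = round(width * scale) // multiple * multiple
    new_h = round(height * scale) // multiple * multiple
    return new_w, new_h

# Shrinking a 768x768 render to feed a 512-base depth model.
print(fit_to_base(768, 768, base=512))   # -> (512, 512)
print(fit_to_base(1024, 768, base=512))  # -> (680, 512)
```

Most UIs do this resize for you behind the scenes, which is why swapping from the 768 model to the 512 depth model "just works".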

What I am attempting to illustrate is that the origin of the picture you start with doesn't matter. Just upload it into img2img, make sure you are using the MiDaS depth mapping, and tell the AI that what you really want is a warrior fox. The AI will give you a warrior fox in the same pose (mostly), keeping as much coherence as possible. It works the other way as well, to replace the background should you wish.


Illegitimus non carborundum
don't let the bastards grind you down

parkdalegardener

I haven't "cherry picked the results" as they say. Straight up output from the prompt just to show the workflow and possibilities. If I want to really refine a request (prompt) I run a grid similar to the "contact sheets" and "exposure strips" from the old days of analog print film and darkroom work.

It's not the same thing as the mass generation of images I put out for the Egyptian textures. That is the point-and-spray technique: roll the dice and take your chances. A very crude but effective method of generating a bunch of textures all at once.
Illegitimus non carborundum
don't let the bastards grind you down

Aelin

I didn't know about the second AI you talked about for seamless files, not since the time I tried to convince Stable Diffusion (online, and with only 4 choices at the time) to do seamless work!
:ty:
Unfortunately, my Egyptian motifs weren't as good as yours when I tried :tearlaugh: They looked more like they "went through the washing machine" :tearlaugh:
********
Check FRM for great products

parkdalegardener

Illegitimus non carborundum
don't let the bastards grind you down

thelufias

I've got a feeling this room will get a lot of visitors, PDG... Thanks for doing this...