Mariab said:
Awh, bless! Likewise, I see so much talent in your work and in the snippets everyone else shared here! I am looking forward to seeing more and more! Thank you so much for your kind message!

First of all: Love your work, I checked out your Insta profile.
And yes, we will see where AI takes us. It's a normal development that jobs disappear as technology evolves, but what's new is how quickly it's happening. I think that in a few years AI will pose a real risk to several lines of creative work.
A big problem is that many customers want things done quickly and cheaply, but in the creative field you need time to come up with ideas and develop them.
Interesting point about the contracts, though: how can they be sure no AI was used? That's basically impossible. You can just generate a picture and use it as a reference instead of creating everything yourself.
At the moment my job is safe, too; I also do some filming, so I guess it will take AI a few more years until it can really replace me. But in ten years, who knows...
Keep up the good work, I hope you will post more drawings here, especially Michael of course!
Oh, this is a lengthy topic, and I am happy to get into it, but if any mods deem this appropriate for another thread, please let me know. Apologies for the elephant-like message below.
Before addressing the whys and hows of regulated AI use in the book business, I want to preface this by saying that I am talking about generative AI. I am in no way referring to, for instance, medical AI that relies on machine learning to analyze health data and improve the reading of scans, the detection of tumors, etc. There is a significant difference between traditional AI and generative AI. As per https://link.springer.com/chapter/10.1007/978-3-031-92973-1_6, generative artificial intelligence represents an advanced class of models designed to create original content that reflects the characteristics of an initial dataset. It can be described as a model capable of generating human-like text (or images/video) in response to a prompt.
Now that we have a general definition, we can move on to where the problems lie. At first glance, AI seems like a magnificent tool to generate projects and inspire ideas; it's quick, it's cheap, it's accessible to the masses. So what is the problem? AI software by itself cannot generate a single pixel. It needs a comprehensive dataset on which to "feed", to "learn", to "imitate". The manner in which these datasets were created remains to this day highly unethical: they were put together by scraping content regardless of whether or not that content was free to use. Artworks, photography sets, online books, music, everything was fed into AI without consent from the respective owners and without compensation.
Many people are under the impression that if you find a picture online (say, on Pinterest or Google), it's free to use because you can save or download it: "if it's online, it's free to use". Legally speaking, not everything available on the internet is free to use. In fact, many pictures, artworks, covers, photographs, and songs are protected by copyright, and their intellectual property belongs to their respective owners. This is why, if you wish to publish a book, for instance, you can't use the same book cover as the Harry Potter books (even though you can find the artworks online, download them, and print them). Those covers were paid for and are owned by the publishing house, the author, or the artist, depending on the contract that was signed. Think Game Of Thrones promotional posters, think House Of The Dragon concept art; using, copying, distributing, displaying, or adapting a copyrighted artwork without the owner's permission violates the creator's exclusive rights to their work and constitutes copyright infringement.
And this is why AI is problematic, and why the Terms & Conditions of AI services specify that AI-generated images cannot benefit from copyright. When you generate an image using gen-AI, that image is the result of multiple scraped artworks being merged together in a "Frankenstein" sort of way; the end result has no real owner and (legally) cannot be owned by anyone. Generally, this also means you cannot monetize it or make a profit off of it. The same goes for AI-authors: when you generate a story with AI, the software merges together storylines and paragraphs from the books and texts that were fed into its dataset. (Of course, the end result is very unpolished and needs substantial editing, but you get what I mean: AI did not create something original, it melded together texts it already knew.)
Now, if you're an author or a publisher, not only do you not want to risk future legal issues over your books (new laws regulating AI are expected to emerge), but you also want to make sure you can legally prevent other people from copying your book cover, your book's content, etc. This is why many stipulate a strict no-AI clause in their contracts.
We finally arrive at how one can check whether someone used AI to generate an artwork. The most obvious course of action is finding the artwork in an AI-generated portfolio (people forget that when you generate an AI image, that image gets stored and can be repurposed/re-prompted). In addition, though AI is definitely getting harder to spot, a professional can still pinpoint mistakes and inconsistencies only AI makes. What do I mean by that? I'll give a more obvious example: I remember being asked whether an image was AI-generated, and at first glance, the regular "tells" were not there (no extra fingers, no absent fingers, no hair strands melding into the skin or clothes, no difference between the sizes of the pupils, no glaring anatomical discrepancies, no blurriness/pixelation in awkward places and random sharpness in others). However! The light was hitting the character's face from the right, the torso had light shining on it from the left, the legs had light shining on them from the back, and the hair curls seemed to be growing out of the back of the neck instead of the head (this is just a random example). These are mistakes even rookies do not make (most artists begin by drawing from references or tracing references, and all these non-AI materials contain consistent sources of light). I can elaborate more on this subject if anyone is interested, but I think you get my point. Think of trained professionals like bank tellers, law enforcement, and cash handlers who can spot fake money; the average person might not be able to tell that the dollar bills in their hands are fake, but these professionals can, because they know what to look for. It's the same with artworks.
In my opinion, the easiest way to make sure AI is not used in the creative process is to evaluate the process itself by requesting and offering frequent, regulated work-in-progress updates (WIPs). In my case, when I sign a contract, I inform my client of when to expect updates, I explain when certain creative changes can be made, I record my drawing process (not just a timelapse; I actually set up my camera so my upper body is visible as I work on the artwork), and I even offer my clients the possibility of watching me work live on their commission (I've been using Discord lately). This enables them to see how I draw everything by hand; they can ask questions and request live changes (changes that Midjourney, for instance, has a hard time replicating). There are also things that you cannot really teach AI (at least, not right now): colour theory, or how to visually convey a character's personality or identity through proportion, perspective, and physical features (right now, AI has trouble displaying proper diversity when it comes to different races and ethnicities).
I will also say that the contract between artist and client is in itself a very good deterrent, because should legal questions arise over the authenticity of the artwork, and should the artwork in question be proven to be AI, the "artist" will have some *hefty* sums to pay (unless they can prove the authenticity of their work). In my case, my personal contract includes a "Warranty Of Originality".
As for me, I strongly believe that AI *can* be an amazing creative tool, if only they could find a way to make the datasets ethical. For those who are not in the business: when you commission an artist, if you intend to make money off the artwork they draw for you, you will need to purchase a commercial license from the artist (exclusive or non-exclusive). A commercial license gives the client the right to make a profit off the artwork, while the copyright is still retained by the artist. Another option is the purchase of FULL copyright, or you can enter into a "work for hire" contract with the artist, which means the client retains ALL rights over the artwork and may do with it as they please. To avoid infringing copyright, this is what the AI companies should have done: reach out to artists, authors, and musicians and enter into "work for hire" contracts with them. There you have it: all these creators would have generated data for AI in an ethical, consensual, and properly compensated manner. Sure, this would have been costly and would have taken longer, but seeing as all these creators spent years learning and mastering their craft, it is quite unfair to take their work without consent and use it to generate whatever (I've seen lovely creators have their styles stolen and used even for p0rn advertisements, something a lot of people don't want associated with their work). I've had my own art stolen and fed into AI, and what the AI users didn't know was that most of the work displayed in my online portfolio is contracted work; I have very few personal drawings out there (which means these users probably had to deal with the publishers' legal teams).
I definitely think AI is part of the future, but I do wish it had not "taken over" everything in such a parasitic way (almost every major app and platform tries to push AI to the front). Technologically and medically, AI has impressive uses: it speeds up the discovery of new molecules, it helps determine the pharmacodynamics and pharmacokinetics of drugs, it improves radiology, etc.
However, for the creative fields? It's an unpopular opinion, I know, but I will echo the sentiment of the great Hayao Miyazaki:
Art is carved from pain, but AI knows only patterns. Miyazaki and artists like him have spent lifetimes pouring their souls into handcrafted masterpieces, each frame a reflection of their experiences, emotions, and relentless dedication. To compare AI-generated work to such artistry is to strip away the very essence of what makes human creativity profound. AI, at its core, is an algorithm analyzing, replicating, and generating based on patterns, never truly creating the way an artist does. It does not suffer, dream, or persevere through failure. AI can only be a tool used to mimic or replicate; it can never undermine true art.
True art isn’t just about the final image, it’s about the journey, the struggle.
We see how Michael's very experiences and emotions shaped his music, infused it with his soul. In my opinion, AI cannot do that.
I want AI to do the menial work, while we get to do the creative work, not the other way around.
I will add below a clip of Miyazaki's perspective, as well as award-winning Laura Rubin's demonstration of how she paints high realism from real life.