I stumbled across this post on Twitter: https://twitter.com/AndrewMayne/status/1511827454536474626
Or maybe it was another one, but ultimately it led me to a discussion about an AI tool in development called DALL-E 2, which can take a text-based description and create an image from it.
For example, the link above is for an image generated from the prompt "a raccoon astronaut with the cosmos reflecting on the glass of his helmet dreaming of the stars," and it absolutely nailed it.
I just backtracked, and what led me down that rabbit hole was a post on Chuck Wendig's page where someone had asked for an image of "a rabbit detective sitting on a park bench and reading a newspaper in a victorian setting," and that one is amazing, too.
The context in which that was shared was a discussion of the implications for designers and illustrators. If someone can generate an image like that in a minute, where does that leave designers?
But as a self-published author who prefers to do my own covers most of the time, I see the possibility of getting illustrated fantasy covers at a lower price point, which would be fantastic.
Imagine being able to pay for software that lets you type "dragon in the sky breathing fire with mountains" and have a finished image in a minute, instead of spending six weeks trying to track down the designer who flaked on you.
I'm sure there will be limitations. Some of the examples I saw from a similar tool that made the rounds a while back were amazing, but most were not.
And I have no idea what they’ll charge for the tool or if there will be limits on commercial use. But if you’re trying to keep an eye out for future industry developments, I’d say this is one to watch.