welcome to The Prompter #007!
this is a weekly newsletter written by krea.ai to keep you updated with the top news around prompt engineering and AI.
📰 AI news
new CLIP models are out!
LAION announced the release of three large CLIP models with OpenCLIP: ViT-L/14, ViT-H/14 and ViT-g/14.
CLIP models can measure the similarity between a text and an image, and since they appeared in 2021 they have been useful for creating datasets, classifying images, and generating images from text with models like GANs, autoencoders, and diffusion models.
of the models released, it is worth mentioning that ViT-H/14 is now the best open-source CLIP model to date.
s/o to Romain Beaumont, who trained H/14 and g/14 on Stability AI’s GPU cluster, and to Ross Wightman, who trained L/14 on the JUWELS Booster supercomputer.
[LAION announcement][Romain’s announcement]
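at their core, CLIP models embed images and texts into a shared vector space and score how well they match with a cosine similarity between L2-normalized embeddings. the tiny numpy sketch below illustrates just that scoring step on made-up embedding vectors (the captions and numbers are hypothetical, not real CLIP outputs — for actual embeddings you would use the OpenCLIP library):

```python
import numpy as np

def clip_similarity(image_emb: np.ndarray, text_embs: np.ndarray) -> np.ndarray:
    """Cosine similarity between one image embedding and N text embeddings,
    computed the way CLIP does: L2-normalize both sides, then dot product."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return txt @ img

# dummy embeddings standing in for real CLIP outputs
image = np.array([0.9, 0.1, 0.0])
texts = np.array([
    [1.0, 0.0, 0.0],   # e.g. "a photo of a cat" (hypothetical)
    [0.0, 1.0, 0.0],   # e.g. "a photo of a dog" (hypothetical)
])

scores = clip_similarity(image, texts)
print(scores.argmax())  # → 0, the first caption matches best
```

this ranking trick is what powers zero-shot classification, dataset filtering, and CLIP-guided image generation alike.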
textual inversion in diffusers
textual inversion is a technique introduced in An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion.
at a high level, this technique consists of creating new words that can describe a set of images.
once a new word (also known as a pseudo-word) is created, it can be used like any other word for prompting.
Huggingface recently announced that they now support this cool technique in their library Diffusers.
they also created the stable diffusion concept library, where you can browse and use more than 350 textual inversions created by the community, super neat!
[diffusers][announcement tweet][SD concept library][example]
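under the hood, textual inversion learns a fresh embedding vector for one new token while all the model weights stay frozen; once that vector exists, the pseudo-word slots into a prompt like any other token. the toy sketch below (a made-up vocabulary and random vectors, not the Diffusers API) shows just that mechanism:

```python
import numpy as np

# toy embedding table standing in for a text encoder's vocabulary
vocab = {"a": 0, "painting": 1, "of": 2}
embeddings = np.random.default_rng(0).normal(size=(3, 4))

def add_pseudo_word(token: str, learned_vector: np.ndarray) -> None:
    """Register a new token with a (hypothetically) learned embedding,
    leaving every existing weight untouched — the core idea of textual inversion."""
    global embeddings
    vocab[token] = len(vocab)
    embeddings = np.vstack([embeddings, learned_vector])

def embed_prompt(prompt: str) -> np.ndarray:
    """Look up one embedding per whitespace-separated token."""
    return np.stack([embeddings[vocab[t]] for t in prompt.split()])

# once registered, "<my-style>" behaves like any other word in a prompt
add_pseudo_word("<my-style>", np.ones(4))
print(embed_prompt("a painting of <my-style>").shape)  # → (4, 4)
```

in practice the learned vector is optimized so that the frozen diffusion model reconstructs your handful of example images when the pseudo-word appears in the prompt.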
fast.ai stable diffusion course
Jeremy Howard announced a partnership with Stability AI to create a fast.ai course where they will be teaching stable diffusion from the foundations.
the full course will be released for everyone in early 2023, but it will be possible to follow the classes as they are being recorded.
[link]
spawning
spawning aims to provide artists with ownership of their training data, allowing them to opt into or out of the training of large AI models, set permissions on how their style and likeness are used, and offer their own models to the public.
they released haveibeentrained.com, a website where any artist can check whether their artwork is in the dataset that Stability AI used to train stable diffusion.
great initiative, although they should keep in mind that stable diffusion also uses a pre-trained CLIP model, which was trained on a private dataset from OpenAI.
[about]
🛠️ tools for prompting
AI texture generator for Blender by Antonio Freyre. [link]
Antonio Cao shipped his Figma plugin where he integrated stable diffusion. [link]
@proximasan shared a list with relevant links for anyone interested in prompting or using AI creatively, all curated by the parrot zone. [link]
Matt DesLauriers developed a tool to create color palettes from text descriptions. [link]
freepik now includes text-to-image generation services within their tool. [link]
canva announced the integration of text-to-image within their app. [link]
artmagic is a new tool that turns children’s artworks into cartoons. [link]
🎨 AI Art
Scott Lighthiser combined Stable Diffusion with EBSynth to create this amazing (and creepy) video.
neat work by Thomas Hooper on applying stable diffusion on top of one of his 3D loops.
great work by CoffeeVectors on turning a fairly simple human face render from Unreal Engine into this hyper-real-looking video.

Boldtron did amazing work on this 100% AI-generated image for an album cover.


✨ about Krea
we fine-tuned a GPT-3 model to generate prompts and it worked way better than expected.
we released a new version of our product that integrates a new AI-powered image search and lets you like and save prompts.

we’ll be announcing new features in our Discord, come say hi!