Wastholm.com

Graphite is a free, open source vector and raster graphics engine, available now in alpha. Get creative with a nondestructive editing workflow that combines layer-based compositing with node-based generative design.

Generate images from within Krita with minimal fuss: Select an area, push a button, and new content that matches your image will be generated. Or expand your canvas and fill new areas with generated content that blends right in. Text prompts are optional. No tweaking required!

Here’s how I made sense of responsive image content, progressing from simpler to more complicated — and then back to simple.

KanjiVG (Kanji Vector Graphics) provides vector graphics and other information about the kanji used in Japanese. For each character, it provides an SVG file giving the shape and direction of its strokes, as well as the stroke order. Each file is also enriched with information about the character's components, such as the radical or the type of stroke employed.

It is very easy to create stroke order diagrams, animations, kanji dictionaries, and much more using KanjiVG. See Projects using KanjiVG for a growing list of applications of the KanjiVG data.
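As a sketch of how the data can be consumed, the snippet below (my own illustration, not an official example) parses a trimmed-down KanjiVG-style SVG with Python's standard library and lists the strokes in order; the `kvg:` namespace and the `-s1`, `-s2` stroke-id suffixes follow the project's documented conventions, but the embedded file is a simplified stand-in:

```python
import re
import xml.etree.ElementTree as ET

# Trimmed-down stand-in for a KanjiVG file (real files carry many more
# attributes and nested component groups).
SVG = """<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:kvg="http://kanjivg.tagaini.net">
  <g id="kvg:StrokePaths_04e8c">
    <g id="kvg:04e8c" kvg:element="\u4e8c">
      <path id="kvg:04e8c-s1" kvg:type="\u31d0" d="M25,35 L85,33"/>
      <path id="kvg:04e8c-s2" kvg:type="\u31d0" d="M15,70 L95,68"/>
    </g>
  </g>
</svg>"""

SVG_NS = "{http://www.w3.org/2000/svg}"
KVG_NS = "{http://kanjivg.tagaini.net}"

def strokes_in_order(svg_text):
    """Return (stroke_number, stroke_type, path_data) tuples in stroke order."""
    root = ET.fromstring(svg_text)
    strokes = []
    for path in root.iter(SVG_NS + "path"):
        # Stroke order is encoded in the id suffix, e.g. "kvg:04e8c-s2".
        m = re.search(r"-s(\d+)$", path.get("id", ""))
        if m:
            strokes.append((int(m.group(1)),
                            path.get(KVG_NS + "type"),
                            path.get("d")))
    return sorted(strokes)

for number, stroke_type, d in strokes_in_order(SVG):
    print(number, stroke_type, d)
```

From here, the `d` path data can be fed to any SVG renderer to produce stroke-order diagrams or animations.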

A simple command-line tool for text-to-image generation using OpenAI's CLIP and Siren.

Gradient Magic is the largest gallery of CSS Gradients on the web, with new and exciting gradients added every day.

CSS Gradients are fancy patterns created via CSS, primarily used to add color or patterns to a website. They have many benefits over images, including being easier to work with and much smaller in size.

A target image is provided as input. The algorithm searches for the single shape that, when drawn, most reduces the error between the target image and the current drawing. It repeats this process, adding one shape at a time. Around 50 to 200 shapes are needed to reach a result that is recognizable yet artistic and abstract.
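The greedy loop can be sketched in a few dozen lines of Python. This toy version (my own illustration, not the project's code) uses opaque axis-aligned rectangles on a tiny grayscale target and plain random search for each shape:

```python
import random

W = H = 16
# Hypothetical target: a bright square on a dark background.
target = [[1.0 if 4 <= x < 12 and 4 <= y < 12 else 0.0
           for x in range(W)] for y in range(H)]
canvas = [[0.0] * W for _ in range(H)]

def error(img):
    """Sum of squared pixel differences against the target."""
    return sum((img[y][x] - target[y][x]) ** 2
               for y in range(H) for x in range(W))

def with_rect(img, x0, y0, x1, y1, v):
    """Copy of img with a solid gray rectangle drawn on top."""
    out = [row[:] for row in img]
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = v
    return out

random.seed(0)
for _ in range(50):              # add one shape per iteration
    best, best_err = None, error(canvas)
    for _ in range(200):         # random search for the best rectangle
        x0, x1 = sorted(random.sample(range(W + 1), 2))
        y0, y1 = sorted(random.sample(range(H + 1), 2))
        candidate = with_rect(canvas, x0, y0, x1, y1, random.random())
        e = error(candidate)
        if e < best_err:
            best, best_err = candidate, e
    if best is not None:         # keep the shape only if it helps
        canvas = best
```

Real implementations typically refine each random candidate with hill climbing, support more shape types (triangles, ellipses, rotated rectangles), and blend shapes with partial alpha rather than overwriting pixels.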

Hello! This is part one of a short series of posts on writing a simple raytracer in Rust. I’ve never written one of these before, so it should be a learning experience all around.

One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation. Say you want to know what sort of image would result in “Banana.” Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana (see related work in [1], [2], [3], [4]). By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.
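The procedure can be caricatured without any neural network at all. In the sketch below, a hypothetical linear "banana score" stands in for the network's class activation, and a neighbor-correlation penalty plays the role of the natural-image prior; gradient ascent from random noise then does the tweaking. The scoring function and all names are invented for illustration:

```python
import random

N = 8
random.seed(1)
# Start from an image full of random noise, as the post describes.
img = [[random.random() for _ in range(N)] for _ in range(N)]

def class_score(im):
    # Hypothetical stand-in for the net's "banana" activation:
    # a crude linear template that rewards a bright centre patch.
    return sum(im[y][x] * (1.0 if 2 <= x < 6 and 2 <= y < 6 else -1.0)
               for y in range(N) for x in range(N))

def smoothness_penalty(im):
    # The prior: neighbouring pixels should be correlated, so large
    # differences between adjacent pixels are penalised.
    s = 0.0
    for y in range(N):
        for x in range(N):
            if x + 1 < N:
                s += (im[y][x] - im[y][x + 1]) ** 2
            if y + 1 < N:
                s += (im[y][x] - im[y + 1][x]) ** 2
    return s

def objective(im):
    return class_score(im) - 0.5 * smoothness_penalty(im)

start = objective(img)
eps, lr = 1e-4, 0.05
for _ in range(25):
    # Finite-difference gradient; a real setup would use backprop.
    grad = [[0.0] * N for _ in range(N)]
    for y in range(N):
        for x in range(N):
            old = img[y][x]
            img[y][x] = old + eps
            hi = objective(img)
            img[y][x] = old - eps
            lo = objective(img)
            img[y][x] = old
            grad[y][x] = (hi - lo) / (2 * eps)
    for y in range(N):
        for x in range(N):
            # Ascent step, clamped to valid pixel values.
            img[y][x] = min(1.0, max(0.0, img[y][x] + lr * grad[y][x]))
```

Without the smoothness term the ascent drives pixels toward high-frequency noise that maximizes the score; the penalty pulls the result toward smoother, more natural-looking structure, which is the point the paragraph above makes.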
