Wastholm.com

LibreChat AI is an open-source platform that lets users chat and interact with various AI models through a unified interface. You can connect to OpenAI, Gemini, Anthropic, and other providers through their APIs, or add Ollama as an endpoint and use LibreChat to interact with local LLMs. It can be installed locally or deployed on a server.

A vector database indexes and stores vector embeddings for fast retrieval and similarity search, with capabilities like CRUD operations, metadata filtering, horizontal scaling, and serverless operation.
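
As a rough illustration of the core idea (not of any particular product), the sketch below builds a tiny in-memory "index" with NumPy and answers a query by cosine similarity; the items, embeddings, and metadata are invented for the example.

import numpy as np

# Toy index: each stored item is an embedding vector plus a metadata record.
embeddings = np.array([
    [0.1, 0.9, 0.0],   # item 0
    [0.8, 0.1, 0.1],   # item 1
    [0.0, 0.2, 0.9],   # item 2
], dtype=np.float32)
metadata = [{"title": "pelicans"}, {"title": "compilers"}, {"title": "sourdough"}]

def top_k(query, k=2):
    # Cosine similarity between the query vector and every stored embedding.
    q = query / np.linalg.norm(query)
    m = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    scores = m @ q
    best = np.argsort(scores)[::-1][:k]
    return [(metadata[i], float(scores[i])) for i in best]

print(top_k(np.array([0.1, 0.8, 0.1], dtype=np.float32)))

A production vector database layers the capabilities listed above on top of this basic operation: persistent storage, approximate nearest-neighbour indexes for scale, filtering on the metadata, and so on.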

Generate images from within Krita with minimal fuss: Select an area, push a button, and new content that matches your image will be generated. Or expand your canvas and fill new areas with generated content that blends right in. Text prompts are optional. No tweaking required!

LLM provides a Python API for executing prompts, in addition to the command-line interface.

[...]

To run a prompt against the gpt-3.5-turbo model, run this:

import llm

# Load the model by name and attach an API key directly on the model object.
model = llm.get_model("gpt-3.5-turbo")
model.key = 'YOUR_API_KEY_HERE'

# Execute the prompt and print the generated text.
response = model.prompt("Five surprising names for a pet pelican")
print(response.text())
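
The same library also ships a command-line interface; assuming llm is installed and an OpenAI key has been configured (for example with llm keys set openai), roughly the equivalent call from the shell would be:

llm -m gpt-3.5-turbo "Five surprising names for a pet pelican"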

This analogy to lossy compression is not just a way to understand ChatGPT’s facility at repackaging information found on the Web by using different words. It’s also a way to understand the “hallucinations,” or nonsensical answers to factual questions, to which large language models such as ChatGPT are all too prone. These hallucinations are compression artifacts, but—like the incorrect labels generated by the Xerox photocopier—they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our own knowledge of the world. When we think about them this way, such hallucinations are anything but surprising; if a compression algorithm is designed to reconstruct text after ninety-nine per cent of the original has been discarded, we should expect that significant portions of what it generates will be entirely fabricated.

Abstract We introduce MusicLM, a model generating high-fidelity music from text descriptions such as "a calming violin melody backed by a distorted guitar riff". MusicLM casts the process of conditional music generation as a hierarchical sequence-to-sequence modeling task, and it generates music at 24 kHz that remains consistent over several minutes. Our experiments show that MusicLM outperforms previous systems both in audio quality and adherence to the text description. Moreover, we demonstrate that MusicLM can be conditioned on both text and a melody in that it can transform whistled and hummed melodies according to the style described in a text caption. To support future research, we publicly release MusicCaps, a dataset composed of 5.5k music-text pairs, with rich text descriptions provided by human experts.

A simple command-line tool for text-to-image generation using OpenAI's CLIP and Siren.

An open source research project exploring the role of machine learning as a tool in the creative process.

GPT-Neo is the code name for a series of transformer-based language models loosely styled around the GPT architecture that we plan to train and open-source. Our primary goal is to replicate a GPT-3-sized model and open-source it to the public, for free.

This text gives an overview of Gödel’s Incompleteness Theorem and its implications for artificial intelligence. Specifically, we deal with the question of whether Gödel’s Incompleteness Theorem shows that human intelligence could not be recreated by a traditional computer.
