Wastholm.com

qqqa is a two-in-one, stateless CLI tool that brings LLM assistance to the command line without ceremony.

The two binaries are:

  • qq - ask a single question, e.g. "qq how can I recursively list all files in this directory" (qq stands for "quick question")
  • qa - a single-step agent that can optionally use tools to finish a task: read a file, write a file, or execute a command with confirmation (qa stands for "quick agent")

LDR is an AI research assistant that performs systematic research by:

  • Breaking down complex questions into focused sub-queries
  • Searching multiple sources in parallel (web, academic papers, local documents)
  • Verifying information across sources for accuracy
  • Creating comprehensive reports with proper citations
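
LDR's own code isn't shown here; the following is only a minimal Python sketch of that workflow, with stub functions (search_web, search_papers, search_local_docs) standing in for its real search backends:

from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for LDR's real search backends.
def search_web(query):
    return [f"web result for {query!r}"]

def search_papers(query):
    return [f"paper matching {query!r}"]

def search_local_docs(query):
    return [f"local document mentioning {query!r}"]

def research(question):
    # Break the question into focused sub-queries (LDR would ask an LLM;
    # a fixed split keeps this sketch self-contained).
    sub_queries = [f"{question}: background", f"{question}: recent findings"]

    # Fan every sub-query out to all sources in parallel.
    sources = (search_web, search_papers, search_local_docs)
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(src, q) for q in sub_queries for src in sources]
        evidence = [hit for f in futures for hit in f.result()]

    # A real run would now cross-check the evidence and draft a cited
    # report; here we simply return what was gathered.
    return evidence

print(research("effects of sleep deprivation on memory"))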

The Email Summariser is a Python script designed to automatically retrieve, process, categorize, and summarize emails using AI models. It uses Gmail for email fetching and Ollama for AI-powered categorization and summarization.
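
The script itself isn't reproduced here; a minimal sketch of the same flow might look like the following, assuming Gmail access over IMAP with an app password and a local Ollama server at its default address:

import email
import imaplib
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint
MODEL = "llama3"  # any model already pulled locally

def fetch_unread(user, password, limit=5):
    # Gmail exposes mail over IMAP; an app password is assumed here.
    imap = imaplib.IMAP4_SSL("imap.gmail.com")
    imap.login(user, password)
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")
    for num in data[0].split()[:limit]:
        _, msg_data = imap.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        yield msg["Subject"], msg.get_payload(decode=True) or b""
    imap.logout()

def summarize(text):
    # One-shot, non-streaming request to the local Ollama server.
    prompt = f"Categorize and summarize this email in two sentences:\n\n{text}"
    resp = requests.post(OLLAMA_URL, json={"model": MODEL, "prompt": prompt, "stream": False})
    return resp.json()["response"]

for subject, body in fetch_unread("you@gmail.com", "app-password"):
    print(subject, "->", summarize(body.decode(errors="ignore")))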

In at least some cases, models from all developers resorted to malicious insider behaviors when that was the only way to avoid replacement or achieve their goals—including blackmailing officials and leaking sensitive information to competitors. We call this phenomenon agentic misalignment.

Searles's paper, titled "Dazed & Confused: A Large-Scale Real-World User Study of reCAPTCHAv2," found that Google's widely used CAPTCHA system is primarily a mechanism for tracking user behavior and collecting data while providing little actual security against bots. The study revealed that reCAPTCHA extensively monitors users' cookies, browsing history, and browser environment (including canvas rendering, screen resolution, mouse movements, and user-agent data), all of which can be used for advertising and tracking purposes. Analyzing over 3,600 users, the researchers found that solving image-based challenges takes 557% longer than checkbox challenges. They concluded that reCAPTCHA has cost society an estimated 819 million hours of human time, valued at $6.1 billion in wages, while generating massive profits for Google through its tracking capabilities and data collection, with the value of the tracking cookies alone estimated at $888 billion.

LibreChat AI is an open-source platform that allows users to chat and interact with various AI models through a unified interface. You can use OpenAI, Gemini, Anthropic, and other AI models via their APIs, or point LibreChat at an Ollama endpoint to interact with local LLMs. It can be installed locally or deployed on a server.

A vector database indexes and stores vector embeddings for fast retrieval and similarity search, with capabilities like CRUD operations, metadata filtering, horizontal scaling, and serverless operation.
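
As a rough illustration of the idea rather than any particular product's API, a brute-force similarity search over stored embeddings with a metadata filter fits in a few lines of Python:

import numpy as np

# Toy in-memory "vector database": embeddings plus metadata, queried by
# cosine similarity with an optional metadata filter. Real systems add
# approximate indexes, CRUD, and scaling on top of this idea.
embeddings = np.random.rand(1000, 384)          # one 384-dim vector per document
metadata = [{"lang": "en" if i % 2 else "de"} for i in range(1000)]

def search(query_vec, top_k=5, lang=None):
    # Metadata filtering: restrict the candidate set before scoring.
    idx = np.array([i for i, m in enumerate(metadata) if lang is None or m["lang"] == lang])
    candidates = embeddings[idx]

    # Cosine similarity between the query and every candidate vector.
    sims = candidates @ query_vec / (
        np.linalg.norm(candidates, axis=1) * np.linalg.norm(query_vec)
    )
    best = np.argsort(sims)[::-1][:top_k]
    return [(int(idx[i]), float(sims[i])) for i in best]

print(search(np.random.rand(384), top_k=3, lang="en"))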

Generate images from within Krita with minimal fuss: Select an area, push a button, and new content that matches your image will be generated. Or expand your canvas and fill new areas with generated content that blends right in. Text prompts are optional. No tweaking required!

LLM provides a Python API for executing prompts, in addition to the command-line interface.

[...]

To run a prompt against the gpt-3.5-turbo model, run this:

import llm

# Look up the model by name and attach an API key.
model = llm.get_model("gpt-3.5-turbo")
model.key = 'YOUR_API_KEY_HERE'

# Execute the prompt and print the full response text.
response = model.prompt("Five surprising names for a pet pelican")
print(response.text())

This analogy to lossy compression is not just a way to understand ChatGPT’s facility at repackaging information found on the Web by using different words. It’s also a way to understand the “hallucinations,” or nonsensical answers to factual questions, to which large language models such as ChatGPT are all too prone. These hallucinations are compression artifacts, but—like the incorrect labels generated by the Xerox photocopier—they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our own knowledge of the world. When we think about them this way, such hallucinations are anything but surprising; if a compression algorithm is designed to reconstruct text after ninety-nine per cent of the original has been discarded, we should expect that significant portions of what it generates will be entirely fabricated.
