OpenSail - open AI workspace platform
Self-hostable platform for building, running, and sharing AI workspace agents and apps with any model. No vendor lock-in - bring your own LLM provider or run a model locally.
This entry doesn't have a long-form writeup yet. Follow the source link above for the full context.
Related entries
Rapid-MLX - 2-4x faster local LLM inference on Apple Silicon
MLX-native inference engine with OpenAI-compatible API. The novel piece: DeltaNet state snapshots bring prompt caching to non-trimmable architectures (Qwen3.5 hybrids), restoring RNN state in ~0.1ms. 2-5x faster TTFT, native Metal kernels, continuous batching.
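An OpenAI-compatible API means existing OpenAI clients can target the local engine just by swapping the base URL. A minimal sketch of the request body such a server accepts - the URL, port, and model name here are illustrative assumptions, not details from the entry:

```python
import json

# Hypothetical local endpoint: OpenAI-compatible servers expose a
# /v1/chat/completions route; the host and port are assumptions,
# so check Rapid-MLX's own docs for its real defaults.
BASE_URL = "http://localhost:8000/v1/chat/completions"

def chat_payload(model: str, prompt: str) -> dict:
    """Build a standard OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

# Serialize the body exactly as an HTTP client would POST it.
body = json.dumps(chat_payload("qwen3.5-hybrid", "Hello from MLX"))
print(body)
```

Because the wire format matches OpenAI's, any SDK configured with this base URL sends the same body, so existing tooling works unchanged.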
OpenYak - local BYOK alternative to Claude Code
MIT-licensed desktop agent for Windows/macOS/Linux with 20+ tools, MCP support, Ollama, and 100+ models via OpenRouter. No telemetry, fully local execution loop.
code-on-incus - per-agent isolated VMs with active defense
Gives each AI agent its own Incus machine with root, Docker, and systemd. Built-in detector stops threats automatically when an agent goes off-script.
sandstorm - run Claude agents in cloud sandboxes
FastAPI service for running Claude Code agents in secure E2B cloud sandboxes via API, CLI, or Slack. Single call, full agent, no infrastructure.