Connect Ertas with your favorite AI tools and platforms.

Activepieces: Open-source automation with AI-powered workflows
Use fine-tuned models with Aider's AI pair programming CLI
All-in-one AI desktop app for private knowledge bases
Microsoft's multi-agent conversation framework
Deploy custom AI behind Bolt-scaffolded apps
Open-source browser automation agent for any LLM
Visual workflow engine for AI generation pipelines
Connect fine-tuned models to Continue's open-source AI coding assistant
Multi-agent orchestration with role-based AI crews
Fine-tune coding models on your codebase patterns
Open-source platform for LLM app development
Serve fine-tuned models with ExLlamaV2's fast quantized inference
Visual LLM workflows with drag-and-drop simplicity
Enhance Copilot with fine-tuned models trained on your codebase
Privacy-first local AI by Nomic AI
Production NLP pipelines with fine-tuned precision
Self-improving open-source agent framework with persistent skills
Access thousands of open-source models
Your open-source local AI assistant
Lightweight GGUF inference for creative AI workflows
Build context-aware AI applications with fine-tuned models
Graph-based agent orchestration for production workflows
Stateful agents with persistent memory and infinite context
Blazing-fast inference on any hardware
Connect fine-tuned models to your private data
Run AI models locally with a polished desktop UI
Local model server with OpenAI-compatible API
Drop-in OpenAI API replacement powered by your fine-tuned models
AI app builder with fine-tuned model backends
Trigger fine-tuned model inference from Make.com scenarios
TypeScript agent framework on the Vercel AI SDK
Run fine-tuned models natively on Apple Silicon with MLX
Beautiful local AI chat with model management
Connect n8n AI agent nodes to Ertas-trained local models
Run models locally with one command
Self-hosted ChatGPT-like interface for local models
Power your AI agent with fine-tuned local models
Unified API gateway for any LLM
Optimize fine-tuned models for Intel hardware with OpenVINO
Cloud IDE with instant AI model deployment
Hugging Face's minimal code-action agent framework
Build production AI agents with fine-tuned reasoning
Complement Tabnine completions with fine-tuned domain models
Deploy fine-tuned models with NVIDIA's optimized inference engine
Feature-rich web interface for local LLM inference
Compare the Unsloth CLI workflow with the Ertas Studio visual pipeline
Unified TypeScript interface to 3,300+ models from 94 providers
High-throughput production inference engine
Train domain-specific coding models for Windsurf's AI features
No-code AI automation for any business workflow
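Several of the servers listed above advertise an OpenAI-compatible API, which means any OpenAI-style client can target a locally served fine-tuned model just by swapping the base URL. A minimal sketch of such a request, assuming a hypothetical server on `localhost:8080` and a placeholder model name `my-finetune` (both are illustrative, not taken from any specific tool above):

```python
import json
from urllib import request

# Standard OpenAI-style chat-completions payload.
# "my-finetune" is a placeholder for whatever model name
# the local server registers for your fine-tuned weights.
payload = {
    "model": "my-finetune",
    "messages": [
        {"role": "user", "content": "Summarize our release notes."},
    ],
    "temperature": 0.2,
}

def chat(base_url: str = "http://localhost:8080/v1") -> dict:
    """POST the payload to an OpenAI-compatible /chat/completions
    endpoint and return the parsed JSON response."""
    req = request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    # Network call: requires one of the local servers above to be running.
    with request.urlopen(req) as resp:
        return json.load(resp)
```

Because the wire format is the same across these servers, the only thing that changes between, say, a local model server and a hosted gateway is the `base_url` (and possibly an API key header).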