An Open-Source AI Chatbot Template Built With Next.js and the AI SDK by Vercel.
Features · Model Providers · Deploy Your Own · Running locally · Extra Features · Future Roadmap
- Next.js App Router
  - Advanced routing for seamless navigation and performance
  - React Server Components (RSCs) and Server Actions for server-side rendering and increased performance
- AI SDK
  - Unified API for generating text, structured objects, and tool calls with LLMs
  - Hooks for building dynamic chat and generative user interfaces (see the sketch after this list)
  - Supports xAI (default), OpenAI, Fireworks, and other model providers
- shadcn/ui
  - Styling with Tailwind CSS
  - Component primitives from Radix UI for accessibility and flexibility
- Data Persistence
  - Vercel Postgres powered by Neon for saving chat history and user data
  - Vercel Blob for efficient file storage
- NextAuth.js
  - Simple and secure authentication
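As a taste of the hooks mentioned above, here is a minimal sketch of a chat UI built with the AI SDK's `useChat` hook. It assumes an AI SDK 4-style hook shape and a `/api/chat` route handler; the template's actual components, routes, and hook version may differ.

```tsx
'use client';

// Minimal sketch only -- not the template's actual chat component.
// Assumes the AI SDK 4-style useChat API and a /api/chat route handler.
import { useChat } from '@ai-sdk/react';

export function MiniChat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/chat', // assumed route path; adjust to your copy of the template
  });

  return (
    <form onSubmit={handleSubmit}>
      {messages.map((message) => (
        <p key={message.id}>
          {message.role}: {message.content}
        </p>
      ))}
      <input
        value={input}
        onChange={handleInputChange}
        placeholder="Send a message..."
      />
    </form>
  );
}
```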
This template ships with xAI `grok-2-1212` as the default chat model. However, with the AI SDK, you can switch LLM providers to OpenAI, Anthropic, Cohere, and many more with just a few lines of code.
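For illustration, here is a hedged sketch of what such a switch can look like with the AI SDK. The file location, model id, and streaming usage are assumptions rather than the template's exact code; the template centralizes its model configuration elsewhere.

```ts
// Illustrative sketch: swapping the default chat model from xAI to OpenAI.
// Adapt the provider import and model id to your copy of the template.
import { openai } from '@ai-sdk/openai'; // swapped in for '@ai-sdk/xai'
import { streamText } from 'ai';

const result = streamText({
  model: openai('gpt-4o'), // previously: xai('grok-2-1212')
  prompt: 'Why is the sky blue?',
});

// Stream the response to stdout (standalone Node.js usage example).
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```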
You can deploy your own version of the Next.js AI Chatbot to Vercel with one click:
You will need to use the environment variables defined in `.env.example` to run the Next.js AI Chatbot. It's recommended you use Vercel Environment Variables for this, but a `.env` file is all that is necessary.

Note: You should not commit your `.env` file, as it exposes secrets that would allow others to control access to your various AI and authentication provider accounts.
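If you want a quick sanity check that your environment is populated before running the app, a small script along these lines can help. The variable names below are assumptions based on a typical setup of this template; `.env.example` remains the source of truth.

```ts
// check-env.ts -- hypothetical helper, not part of the template.
// The variable names are illustrative; copy the real list from .env.example.
const requiredEnv = ['AUTH_SECRET', 'XAI_API_KEY', 'POSTGRES_URL', 'BLOB_READ_WRITE_TOKEN'];

const missing = requiredEnv.filter((name) => !process.env[name]);

if (missing.length > 0) {
  console.error(`Missing environment variables: ${missing.join(', ')}`);
  process.exit(1);
}

console.log('All required environment variables are set.');
```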
- Install Vercel CLI: `npm i -g vercel`
- Link local instance with Vercel and GitHub accounts (creates a `.vercel` directory): `vercel link`
- Download your environment variables: `vercel env pull`
Then install dependencies and start the development server:

```bash
pnpm install
pnpm dev
```
Your app template should now be running on localhost:3000.
- Appropriate RTL support
- Message editing and conversation branching
- Support for different model providers
  - Dynamic model list fetched from the database instead of hard-coded options, with user customization
- Support for different modalities
  - Image generation
    - DALL-E
    - Leonardo AI (Coming soon)
  - Voice mode (WIP)
- Infinite scroll pagination for chat history
- Authentication with phone number + OTP verification
- User limit (WIP, currently static limit)
- Telemetry and analytics
  - Using ClickHouse and LangWatch
- SMS OTP rate limit
- Model, provider and global rate limits
- PDF (and other file types) support
  - Models that support it out of the box will receive the file directly
  - Models that don't support it will receive a text version converted with Markitdown
  - Image-based PDFs can be converted to text using OCR models
- File manager (to avoid re-uploads)
- Input area notifications
- I18n
- Organizations, teams, and projects
- Custom themes and color palettes
- Pro mode: a toggle to add more features for power users
  - Token count
  - Parameter tuning (e.g. temperature)
  - Claude and Gemini cache control
  - Setting your own API key