Trying out an AI interviewer
Backend changes:
- Add language-specific system prompts for LLM (vi/en/auto)
- Add TTS voice switching based on language selection
- Add auto language detection from text for TTS in auto mode
- Fix health check attribute after TTS refactor
- Update WebSocket handler to propagate language to all services

Frontend rebuild:
- Migrate to Next.js 16.1.3 with React 19
- Add 3D avatar with React Three Fiber (professional 3/4 view)
- Implement VAD-based voice detection
- Add language selection in setup panel (Vietnamese/English/Auto)
- Add real-time processing indicators
- Build reusable UI component library

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
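The "auto language detection from text for TTS" step could be approximated with a simple diacritic heuristic. This is a hedged sketch, not the project's actual code: the `detect_language` and `pick_tts_voice` names, the `vi`/`en` codes, and the placeholder voice names are all assumptions based on the commit message.

```python
# Sketch of auto language detection for TTS voice switching.
# Assumption: function names and voice names are illustrative only.

# A representative set of characters carrying Vietnamese diacritics.
VIETNAMESE_CHARS = set(
    "ăâđêôơưáàảãạắằẳẵặấầẩẫậéèẻẽẹếềểễệ"
    "íìỉĩịóòỏõọốồổỗộớờởỡợúùủũụứừửữựýỳỷỹỵ"
)

def detect_language(text: str) -> str:
    """Return "vi" if the text contains Vietnamese diacritics, else "en"."""
    lowered = text.lower()
    if any(ch in VIETNAMESE_CHARS for ch in lowered):
        return "vi"
    return "en"

def pick_tts_voice(text: str, language: str = "auto") -> str:
    """Map a vi/en/auto language setting to a TTS voice name.

    The voice names below are placeholders, not the repo's real
    Piper models.
    """
    voices = {"vi": "vi_VN-placeholder", "en": "en_US-placeholder"}
    lang = detect_language(text) if language == "auto" else language
    return voices.get(lang, voices["en"])
```

In "auto" mode the detected language decides the voice; an explicit "vi" or "en" setting bypasses detection entirely.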
Yuna - Local Voice AI Interview
100% offline voice AI interview practice with a 3D avatar.
Tech Stack
- Backend: FastAPI + Pipecat
- STT: faster-whisper (local)
- LLM: Ollama (qwen3:4b)
- TTS: Piper (local)
- Frontend: Next.js + React Three Fiber (3D avatar)
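The commit log mentions language-specific system prompts for the LLM (vi/en/auto). A minimal sketch of how that selection might work is below; the prompt wording and the `resolve_system_prompt` name are illustrative assumptions, not the repository's actual prompts.

```python
# Sketch: resolving a vi/en/auto language setting to an LLM system prompt.
# Assumption: prompt texts and function name are illustrative only.

SYSTEM_PROMPTS = {
    "vi": "Bạn là người phỏng vấn. Hãy đặt câu hỏi bằng tiếng Việt.",
    "en": "You are an interviewer. Ask questions in English.",
}

def resolve_system_prompt(language: str, detected: str = "en") -> str:
    """Resolve a vi/en/auto setting to a concrete system prompt.

    In "auto" mode, the language detected from the candidate's
    speech (passed in here as `detected`) decides the prompt;
    unknown codes fall back to English.
    """
    lang = detected if language == "auto" else language
    return SYSTEM_PROMPTS.get(lang, SYSTEM_PROMPTS["en"])
```

Keeping the mapping in one place lets the WebSocket handler propagate a single language value to the LLM, TTS, and UI layers, as the commit log describes.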
Quick Start
# Backend
cd backend
uv sync
source .venv/bin/activate
uvicorn src.main:app --reload --port 8765
Requirements
- Python 3.12
- Ollama with qwen3:4b model
- Piper voice models in models/piper/