---
title: Grammo
emoji: π
colorFrom: purple
colorTo: yellow
sdk: docker
pinned: false
license: gpl-3.0
short_description: AI Translation and Grammar Correction
---
Grammo Backend
Django REST API backend for Grammo, an AI-powered translation and grammar correction service.
Overview
The Grammo backend provides a RESTful API for translation and grammar correction services. It leverages LangChain and HuggingFace models to process language requests, with LangGraph managing conversation state across sessions.
Features
- Translation Service - Natural, contextually appropriate translations between languages
- Grammar Correction - Fixes grammar, spelling, and punctuation errors
- Session Management - Maintains conversation context using Django sessions and LangGraph checkpoints
- Customizable Modes - Supports Default and Grammar modes
- Tone Control - Configurable tone (Default, Formal, Casual) for responses
- Security - CORS support, CSRF protection, secure session management
- HuggingFace Integration - Uses the GPT-OSS-Safeguard-20B model via the HuggingFace API
Tech Stack
- Django 5.2.7 - Web framework
- Django REST Framework - API development
- LangChain - AI agent orchestration
- LangGraph - Conversation state management
- HuggingFace - Language model integration (GPT-OSS-Safeguard-20B)
- Python 3.14+ - Programming language
- SQLite - Database (development)
- Uvicorn - ASGI server
Prerequisites
- Python 3.14 or higher
- pip (Python package manager)
- HuggingFace API Token (create one at https://huggingface.co/settings/tokens)
Installation
1. Navigate to the backend directory
cd backend
2. Create and activate a virtual environment
# Create virtual environment
python -m venv venv
# Activate virtual environment
# On macOS/Linux:
source venv/bin/activate
# On Windows:
venv\Scripts\activate
3. Install dependencies
pip install -r requirements.txt
4. Set up environment variables
Create a .env file in the backend directory (or copy from the example):
cp .env.example .env # or: touch .env
At minimum, set the variables below (see Environment Variables for details):
# Required
SECRET_KEY=your-secret-key-here
HUGGINGFACEHUB_API_TOKEN=your-huggingface-api-token
# Common
DEBUG=True
BUILD_MODE=development # change to "production" for deployment
To generate a Django secret key:
python -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())"
5. Run database migrations
python manage.py migrate
Environment Variables
Create a .env file in the backend directory. The backend loads variables from this file using python-dotenv.
Required
# Django Secret Key (generate one using the command above)
SECRET_KEY=your-secret-key-here
# HuggingFace API Token (any of these will be picked up; preferred shown first)
HUGGINGFACEHUB_API_TOKEN=your-huggingface-api-token
# HF_TOKEN=your-huggingface-api-token
# HF_API_TOKEN=your-huggingface-api-token
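For reference, the settings module presumably loads these values with python-dotenv and resolves the token fallback roughly as follows (an illustrative sketch, not the project's exact `settings.py`):

```python
# Illustrative sketch of environment loading in settings.py (not the actual file)
import os
from pathlib import Path

from dotenv import load_dotenv  # python-dotenv

BASE_DIR = Path(__file__).resolve().parent.parent
load_dotenv(BASE_DIR / ".env")  # make .env values available via os.environ

SECRET_KEY = os.environ["SECRET_KEY"]  # fail fast if the key is missing

# Preferred variable name first, then the accepted fallbacks
HUGGINGFACEHUB_API_TOKEN = (
    os.getenv("HUGGINGFACEHUB_API_TOKEN")
    or os.getenv("HF_TOKEN")
    or os.getenv("HF_API_TOKEN")
)
```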
Core Runtime
# Debug mode (default: True)
DEBUG=True
# App build mode: "development" (default) or "production"
BUILD_MODE=development
# Port only used when running `python app.py` (Hugging Face Spaces)
# PORT=7860
Production-only
When BUILD_MODE=production, the following become relevant:
# Allowed hosts (comma-separated, no spaces)
ALLOWED_HOSTS=yourdomain.com,www.yourdomain.com
# CSRF trusted origins (comma-separated)
CSRF_TRUSTED_ORIGINS=https://yourdomain.com,https://www.yourdomain.com
Notes:
- Most security and CORS flags are derived automatically from `BUILD_MODE` in `backend/settings.py` (see the sketch after this list):
  - In development: permissive defaults for local usage
  - In production: `CORS_ALLOW_ALL_ORIGINS=False`, secure cookies, HSTS, content-type nosniff, and SSL redirect are enabled
- Do not set `SESSION_COOKIE_SECURE`, `CSRF_COOKIE_SECURE`, `CORS_ALLOW_ALL_ORIGINS`, or `SECURE_*` directly via env; they are computed from `BUILD_MODE`.
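A minimal sketch of what that derivation typically looks like, assuming the flags are computed once in `backend/settings.py` (the real file may differ in detail):

```python
# Sketch: security/CORS flags computed from BUILD_MODE (do not set these via env)
import os

IS_PRODUCTION = os.getenv("BUILD_MODE", "development") == "production"

# Permissive in development, locked down in production
CORS_ALLOW_ALL_ORIGINS = not IS_PRODUCTION
SESSION_COOKIE_SECURE = IS_PRODUCTION
CSRF_COOKIE_SECURE = IS_PRODUCTION
SECURE_SSL_REDIRECT = IS_PRODUCTION
SECURE_CONTENT_TYPE_NOSNIFF = IS_PRODUCTION
SECURE_HSTS_SECONDS = 31_536_000 if IS_PRODUCTION else 0  # one year of HSTS in production

# Hosts/origins only matter in production; comma-separated values from .env
ALLOWED_HOSTS = os.getenv("ALLOWED_HOSTS", "").split(",") if IS_PRODUCTION else ["*"]
CSRF_TRUSTED_ORIGINS = os.getenv("CSRF_TRUSTED_ORIGINS", "").split(",") if IS_PRODUCTION else []
```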
Running the Application
Development Mode
Option 1: Django Development Server (with warnings)
python manage.py runserver
The server will run on http://localhost:8000
Option 2: Uvicorn ASGI Server (production-like, no warnings)
uvicorn backend.asgi:application --host 0.0.0.0 --port 8000 --reload
Production Mode
# Set DEBUG=False in .env
uvicorn backend.asgi:application --host 0.0.0.0 --port 8000
# With multiple workers:
uvicorn backend.asgi:application --host 0.0.0.0 --port 8000 --workers 4
Standalone Script (for HuggingFace Spaces)
The backend can also be run as a standalone script:
python app.py
This uses the PORT environment variable (defaults to 7860) and is configured for HuggingFace Spaces deployment.
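Under the assumption that `app.py` simply launches the Django ASGI application through Uvicorn, the entry point amounts to something like this sketch:

```python
# Sketch of app.py - run the Django ASGI app on the port HuggingFace Spaces provides
import os

import uvicorn

if __name__ == "__main__":
    port = int(os.getenv("PORT", 7860))  # Spaces injects PORT; 7860 is the default
    uvicorn.run("backend.asgi:application", host="0.0.0.0", port=port)
```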
API Endpoints
Base URL
All endpoints are prefixed with /api/v1/
GET /api/v1/hello/
Health check endpoint.
Response:
{
"message": "Hello from Grammo!"
}
POST /api/v1/chat/
Send a message to start or continue a chat session.
Request Body:
{
"message": "Translate this text to French",
"chatSession": 0,
"mode": "default",
"tone": "default"
}
Parameters:
- `message` (required): The user's message
- `chatSession` (optional): Session identifier to maintain conversation context
- `mode` (optional): `"default"` or `"grammar"` - Determines how the message is processed
- `tone` (optional): `"default"`, `"formal"`, or `"casual"` - Sets the tone of the response
Response (Success):
{
"status": "success",
"response": "**Original**: \nTranslate this text to French\n**Output**: \nTraduisez ce texte en franΓ§ais\n___\n**Explanation**: \n> Direct translation maintaining the original meaning"
}
Response (Error):
{
"status": "error",
"response": "Invalid message."
}
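As an illustration, the endpoint can also be called from Python with `requests`; a cookie-aware session keeps the Django session (and therefore the conversation context) across calls:

```python
import requests

BASE_URL = "http://localhost:8000/api/v1"
session = requests.Session()  # persists the Django session cookie between calls

payload = {
    "message": "Translate this text to French",
    "mode": "default",   # or "grammar"
    "tone": "default",   # or "formal" / "casual"
    # "chatSession": 0,  # optional session identifier
}
resp = session.post(f"{BASE_URL}/chat/", json=payload)
print(resp.json()["response"])
```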
POST /api/v1/end/
End the current chat session and clear conversation history.
Request Body:
{}
Response (Success):
{
"status": "success",
"message": "Session ended successfully"
}
Response (Error):
{
"status": "error",
"response": "No active session."
}
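Continuing with the `session` object from the chat example above, ending the conversation is a single call:

```python
resp = session.post(f"{BASE_URL}/end/", json={})
print(resp.json())  # e.g. {"status": "success", "message": "Session ended successfully"}
```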
Project Structure
backend/
├── agent_manager/       # AI agent management module
│   └── __init__.py      # LangChain agent setup, session management
├── api/                 # Django REST API application
│   ├── views.py         # API view handlers (chat, hello, end)
│   ├── urls.py          # URL routing
│   └── apps.py          # App configuration
├── backend/             # Django project settings
│   ├── settings.py      # Django configuration
│   ├── urls.py          # Main URL configuration
│   ├── asgi.py          # ASGI application
│   └── wsgi.py          # WSGI application
├── app.py               # Standalone entry point (HuggingFace Spaces)
├── manage.py            # Django management script
├── requirements.txt     # Python dependencies
├── Dockerfile           # Docker configuration for deployment
└── README.md            # This file
Development
Session Management
- Sessions are managed using Django's session framework
- Session data is stored in the cache backend (in-memory for development)
- Each session maintains its own LangGraph agent with conversation checkpointing
- Sessions expire after 24 hours of inactivity or when explicitly ended (a configuration sketch follows below)
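A minimal sketch of the relevant settings, assuming Django's cache-backed session engine and the local-memory cache used in development:

```python
# Sketch: cache-backed sessions with a ~24-hour lifetime (development defaults)
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
SESSION_COOKIE_AGE = 60 * 60 * 24  # 24 hours

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.locmem.LocMemCache",  # in-memory, per-process
    }
}
```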
Agent Architecture
- Uses LangChain's `create_agent` with a structured output wrapper (see the sketch after this list)
- Structured output ensures consistent JSON responses for translation/correction tasks
- Agents are cached per session key for efficient memory usage
- Supports task types: `translation`, `correction`, `follow-up`, `invalid`
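The wiring is roughly as sketched below. This uses LangGraph's prebuilt ReAct agent and `MemorySaver` checkpointer as a stand-in for the project's `create_agent` setup, with `langchain-huggingface` providing the model; names and parameters are illustrative, not the actual `agent_manager` code:

```python
# Sketch of agent_manager: one checkpointed agent per Django session key (illustrative)
import os

from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

_agents = {}  # session_key -> compiled agent, cached for reuse


def get_agent(session_key: str):
    if session_key not in _agents:
        llm = ChatHuggingFace(
            llm=HuggingFaceEndpoint(
                repo_id="openai/gpt-oss-safeguard-20b",
                huggingfacehub_api_token=os.getenv("HUGGINGFACEHUB_API_TOKEN"),
            )
        )
        # MemorySaver checkpoints conversation state per thread_id
        _agents[session_key] = create_react_agent(llm, tools=[], checkpointer=MemorySaver())
    return _agents[session_key]


def chat(session_key: str, message: str) -> str:
    agent = get_agent(session_key)
    result = agent.invoke(
        {"messages": [("user", message)]},
        config={"configurable": {"thread_id": session_key}},
    )
    return result["messages"][-1].content
```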
Database
- Uses SQLite by default (suitable for development)
- No models are currently defined, but Django is configured for future database needs
- Run `python manage.py migrate` to set up the database schema
Caching
- In-memory cache is used for sessions (development)
- Note: For production, consider switching to Redis or another persistent cache backend (see the sketch below)
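A sketch of a Redis-backed replacement, using Django's built-in Redis cache backend (the connection URL is a placeholder):

```python
# Sketch: Redis-backed cache so sessions survive restarts and multiple workers
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",  # built in since Django 4.0
        "LOCATION": "redis://127.0.0.1:6379",  # placeholder connection URL
    }
}
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
```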
CORS Configuration
- CORS is configured to allow cross-origin requests
- In production, configure `CORS_ALLOW_ALL_ORIGINS` and `ALLOWED_HOSTS` appropriately (a sketch follows below)
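For reference, the effective production values typically end up equivalent to something like this (setting names per django-cors-headers; domains are placeholders):

```python
# Sketch: effective production CORS/host settings (django-cors-headers names)
CORS_ALLOW_ALL_ORIGINS = False
CORS_ALLOWED_ORIGINS = [
    "https://yourdomain.com",
    "https://www.yourdomain.com",
]
CORS_ALLOW_CREDENTIALS = True  # required so the session cookie travels cross-origin

ALLOWED_HOSTS = ["yourdomain.com", "www.yourdomain.com"]
```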
Deployment
Docker Deployment (HuggingFace Spaces)
The backend includes a Dockerfile configured for HuggingFace Spaces deployment.
1. Set environment variables in your Space settings:
   - `SECRET_KEY`
   - `HUGGINGFACEHUB_API_TOKEN`
   - `BUILD_MODE=production`
   - `DEBUG=False`
   - `ALLOWED_HOSTS=your-space-name.hf.space`
   - `CSRF_TRUSTED_ORIGINS=https://your-space-name.hf.space`
2. Push your code to the Space repository.
3. The API will be available at `https://your-space-name.hf.space/api/v1/`.
General Production Deployment
- Set production environment variables (see Environment Variables):
  - `BUILD_MODE=production`, `DEBUG=False`
  - `ALLOWED_HOSTS` and `CSRF_TRUSTED_ORIGINS`
- Configure a proper database (PostgreSQL recommended)
- Set up Redis or another cache backend for sessions
- Use a production ASGI server (Uvicorn with multiple workers or Gunicorn with Uvicorn workers)
- Configure reverse proxy (Nginx, Apache) with SSL/TLS
- Set up static file serving or use a CDN
Testing
To test the API endpoints:
# Health check
curl http://localhost:8000/api/v1/hello/
# Send a chat message
curl -X POST http://localhost:8000/api/v1/chat/ \
-H "Content-Type: application/json" \
-d '{"message": "Hello, translate this to Spanish", "mode": "default", "tone": "default"}'
# End session
curl -X POST http://localhost:8000/api/v1/end/
Troubleshooting
Common Issues
- Module not found errors: Ensure your virtual environment is activated and dependencies are installed
- Secret key errors: Make sure `SECRET_KEY` is set in your `.env` file
- HuggingFace API errors: Verify your `HUGGINGFACEHUB_API_TOKEN` is valid
- CORS errors: Check `CORS_ALLOW_ALL_ORIGINS` and `ALLOWED_HOSTS` settings
- Session not persisting: Ensure cache backend is configured correctly
Notes
- The application uses in-memory session storage for development. For production, consider using Redis.
- The HuggingFace model (`openai/gpt-oss-safeguard-20b`) is used for all language processing tasks.
- Conversation state is managed per Django session using LangGraph's checkpoint system.
- The structured output wrapper ensures responses follow a consistent JSON schema.
License
See the LICENSE file for details.