Venice AI API Telegram Bot – Open Source Release

A Telegram bot that integrates with the Venice.ai API, allowing users to generate text, retrieve the list of available models, and more.


Setup Instructions

This guide will help you set up and run the bot on Windows, Mac, and Linux.

1. Prerequisites

  • Python 3.9+ installed
  • Telegram bot token (from @BotFather)
  • Venice AI API key (from Venice.ai)
  • A .env file for storing credentials
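
The .env file is just a plain-text file in the project folder, with one KEY=value pair per line (the installation steps below walk through creating it):

VENICE_API_KEY=your-venice-api-key
TELEGRAM_BOT_TOKEN=your-telegram-bot-token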

2. Installation

Windows

  1. Install Python from python.org.
  2. Open Command Prompt and install the required dependencies:
     pip install -U pip
     pip install python-telegram-bot requests python-dotenv
  3. Clone the repository or create a new folder and download the script.
  4. Create a .env file in the same folder:
     VENICE_API_KEY=your-venice-api-key
     TELEGRAM_BOT_TOKEN=your-telegram-bot-token
  5. Run the bot: python bot.py
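
If the bot fails to start because a credential is missing, a quick way to confirm the .env file is being picked up is a short standalone check like this (a minimal sketch; run it from the project folder):

from dotenv import load_dotenv
import os

load_dotenv()
print("VENICE_API_KEY set:", bool(os.getenv("VENICE_API_KEY")))
print("TELEGRAM_BOT_TOKEN set:", bool(os.getenv("TELEGRAM_BOT_TOKEN")))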

macOS & Linux

  1. Open Terminal and install dependencies:
     sudo apt update && sudo apt install python3-pip -y   # Linux (Debian/Ubuntu)
     brew install python3                                 # macOS (if Python is not already installed)
     pip install python-telegram-bot requests python-dotenv
  2. Clone or create a project folder.
  3. Create a .env file and add:
     VENICE_API_KEY=your-venice-api-key
     TELEGRAM_BOT_TOKEN=your-telegram-bot-token
  4. Run the bot: python3 bot.py
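
Before wiring everything into Telegram, you can also sanity-check the Venice AI key directly against the same /models endpoint the bot uses. A rough standalone test, assuming the dependencies and .env file from the steps above, might look like:

import os
import requests
from dotenv import load_dotenv

load_dotenv()
api_key = os.getenv("VENICE_API_KEY")

# Same base URL and auth header the bot itself uses
response = requests.get(
    "https://api.venice.ai/api/v1/models",
    headers={"Authorization": f"Bearer {api_key}"},
    timeout=30,
)
print(response.status_code)   # 200 means the key was accepted
print(response.text[:300])    # start of the JSON body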

Source Code

import os
import requests
import logging
from dotenv import load_dotenv
from telegram import Update
from telegram.ext import Application, CommandHandler, CallbackContext

# Load environment variables
load_dotenv()
VENICE_API_KEY = os.getenv("VENICE_API_KEY")
TELEGRAM_BOT_TOKEN = os.getenv("TELEGRAM_BOT_TOKEN")

# Venice.ai API base URL
VENICE_API_BASE = "https://api.venice.ai/api/v1"
HEADERS = {"Authorization": f"Bearer {VENICE_API_KEY}", "Content-Type": "application/json"}

# Initialize bot application
app = Application.builder().token(TELEGRAM_BOT_TOKEN).build()

# Set up logging
logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(message)s", level=logging.INFO
)
logger = logging.getLogger(__name__)

### --- Command Handlers --- ###

async def start(update: Update, context: CallbackContext) -> None:
    await update.message.reply_text(
        "Welcome to the Venice AI Bot! 🎨\n\n"
        "Use the following commands:\n"
        "/generate_text - Generate text\n"
        "/models - List available models"
    )

# /models command: fetch the list of available models from Venice.ai and post it to the chat.
async def get_models(update: Update, context: CallbackContext) -> None:
    response = requests.get(f"{VENICE_API_BASE}/models", headers=HEADERS, timeout=60)
    
    if response.status_code != 200:
        await update.message.reply_text(f"Error: API request failed with status {response.status_code}")
        return

    try:
        data = response.json()
        # Depending on the API version, the list may be under "data" (OpenAI-style) or "models".
        model_entries = data.get("data") or data.get("models") or []
        models = "\n".join(f"{m.get('id', '?')} ({m.get('type', '?')})" for m in model_entries)
        await update.message.reply_text(f"Available Models:\n\n{models or 'No models returned.'}")
    except requests.exceptions.JSONDecodeError:
        await update.message.reply_text("Error: Unable to parse API response.")

# /generate_text command: send the user's prompt to the Venice chat completions endpoint and reply with the result.
async def generate_text(update: Update, context: CallbackContext) -> None:
    prompt = " ".join(context.args)
    if not prompt:
        await update.message.reply_text("Usage: /generate_text <your prompt>")
        return

    payload = {
        "model": "llama-3.1-405b",  # Replace with your desired model ID (the /models command lists the available IDs)
        "messages": [
            {"role": "user", "content": prompt}
        ]
    }

    response = requests.post(f"{VENICE_API_BASE}/chat/completions", json=payload, headers=HEADERS, timeout=120)
    
    logger.info(f"API Response Status: {response.status_code}")
    logger.info(f"API Response Body: {response.text}")

    if response.status_code != 200:
        await update.message.reply_text(f"Error: API request failed with status {response.status_code}")
        return

    try:
        data = response.json()
        if 'choices' in data and len(data['choices']) > 0:
            generated_text = data['choices'][0]['message']['content']
            await update.message.reply_text(generated_text)
        else:
            await update.message.reply_text("Failed to generate text.")
    except requests.exceptions.JSONDecodeError:
        await update.message.reply_text("Error: Unable to parse JSON response.")

### --- Set Up Handlers --- ###

app.add_handler(CommandHandler("start", start))
app.add_handler(CommandHandler("models", get_models))
app.add_handler(CommandHandler("generate_text", generate_text))

### --- Start Bot --- ###
if __name__ == "__main__":
    logger.info("Bot is running...")
    app.run_polling()
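
Once the bot is running, open a chat with it in Telegram and try the commands it registers, for example:

/start
/models
/generate_text Write a short poem about the ocean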

Donation Addresses

If you’d like to support development, consider donating to these addresses:

  • BTC: bc1qzf6dtxqu6dwts8a7x4sez38af85826pk4jcseg
  • LTC: ltc1qsdl2dwy47gzdt8tu9h5ptl555zl0u0emd2f6fr
  • DOGE: DTpKBxKDcnUFuW4Z8R6VZUzH7XSSBjwh1k
  • ETH/BNB/VVV: 0x529C3f796016301556Fe5402079cac7f409C9104

Enjoy the bot! 🚀
