Backend Integration Guide - PromptCraft

Backend Integration Guide

Connect your AI assistant to real models using Hugging Face, OpenAI, or local transformers

Hugging Face Inference API

Recommended Models

  • mistralai/Mistral-7B-Instruct-v0.1 - Best quality
  • google/flan-t5-xxl - Good instruction following
  • microsoft/DialoGPT-large - Conversational AI

Setup Steps

  1. Get a Hugging Face API token from your account settings
  2. Install the requests library
  3. Implement the API calls
  4. Handle model responses and errors
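Before implementing the API calls, keep the token out of source code. A minimal sketch, assuming the token is stored in an environment variable (the name HF_API_TOKEN and the helper load_hf_token are illustrative, not part of any library):

```python
import os

def load_hf_token(env_var="HF_API_TOKEN"):
    """Read the Hugging Face API token from the environment.

    Failing fast here is clearer than sending an unauthenticated
    request and getting a 401 back from the API.
    """
    token = os.environ.get(env_var)
    if not token:
        raise RuntimeError(
            f"Set {env_var} to your Hugging Face API token "
            "(Settings -> Access Tokens on huggingface.co)."
        )
    return token
```
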

Python Implementation

import requests

class HuggingFaceAPI:
    def __init__(self, api_key):
        self.api_key = api_key
        self.base_url = "https://api-inference.huggingface.co/models"

    def query_model(self, model_name, prompt, max_new_tokens=500):
        headers = {"Authorization": f"Bearer {self.api_key}"}
        payload = {
            "inputs": prompt,
            "parameters": {
                # max_new_tokens caps the generated continuation;
                # temperature + do_sample enable non-greedy decoding.
                "max_new_tokens": max_new_tokens,
                "temperature": 0.7,
                "do_sample": True
            }
        }

        try:
            response = requests.post(
                f"{self.base_url}/{model_name}",
                headers=headers,
                json=payload,
                timeout=60  # don't hang forever on a cold model
            )
            response.raise_for_status()  # surface HTTP errors (401, 503, ...)
            result = response.json()
            # Text-generation endpoints return a list of dicts.
            return result[0]["generated_text"]
        except (requests.RequestException, KeyError, IndexError) as e:
            return f"Error: {e}"

# Usage
hf_api = HuggingFaceAPI("your_hf_api_key_here")
response = hf_api.query_model(
    "mistralai/Mistral-7B-Instruct-v0.1",
    "Generate a Python function to calculate factorial"
)
print(response)
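The free Inference API keeps models cold: the first request after idle time returns HTTP 503 with an estimated_time field until loading finishes. A retry wrapper can absorb that instead of surfacing it as a hard failure (query_with_retry is a hypothetical helper, not part of the class above):

```python
import time
import requests

def query_with_retry(url, headers, payload, retries=3, wait=10.0):
    """POST to a Hugging Face Inference API endpoint, retrying while
    the model loads.

    A cold model answers 503 with an 'estimated_time' hint; we sleep
    for that long (capped) and try again.
    """
    for attempt in range(retries):
        response = requests.post(url, headers=headers, json=payload, timeout=60)
        if response.status_code == 503:
            # Prefer the server's estimate when provided, else a fixed backoff.
            delay = response.json().get("estimated_time", wait)
            time.sleep(min(delay, 60))
            continue
        response.raise_for_status()
        return response.json()
    raise TimeoutError(f"Model still loading after {retries} attempts")
```

This keeps the retry policy in one place instead of scattering sleep calls through every caller.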

Gradio Integration Example

import gradio as gr

# Create the client once, not on every message.
hf_api = HuggingFaceAPI("your_hf_api_key")

def chat_interface(message, history):
    response = hf_api.query_model("mistralai/Mistral-7B-Instruct-v0.1", message)
    return response

iface = gr.ChatInterface(
    chat_interface,
    title="PromptCraft AI Assistant",
    description="Powered by Hugging Face models"
)
iface.launch()
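gr.ChatInterface passes the running conversation as history (in Gradio's classic tuple format, a list of [user, assistant] pairs), which the function above discards. A naive sketch of folding it into the prompt (format_history is a hypothetical helper; instruction-tuned models like Mistral expect their own chat template for best results):

```python
def format_history(history, message):
    """Fold prior [user, assistant] turns plus the new message into
    one plain-text prompt, ending with an open 'Assistant:' turn."""
    lines = []
    for user_turn, assistant_turn in history:
        lines.append(f"User: {user_turn}")
        lines.append(f"Assistant: {assistant_turn}")
    lines.append(f"User: {message}")
    lines.append("Assistant:")
    return "\n".join(lines)
```

With this, chat_interface would call hf_api.query_model(model, format_history(history, message)) so the model sees the whole conversation rather than only the latest message.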