# PALADIM Model - Sentiment Analysis Demo
This is a PALADIM (Pre Adaptive Learning Architecture of Dual-Process Hebbian-MoE Schema) model fine-tuned for binary sentiment classification on the IMDB dataset.
## Model Details
- Base Model: distilbert-base-uncased
- Task: Sentiment Analysis (Binary Classification)
- LoRA Rank: 16
- Training Data: IMDB dataset (subset)
- Parameters: Only ~0.3% trainable (LoRA adapters)
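The rank-16 adapter setup and the small trainable-parameter fraction can be checked with `peft`. This is only a sketch: the `lora_alpha` and `lora_dropout` values below are assumptions, not the exact settings used for this checkpoint, and the reported fraction will also depend on whether the classification head is counted as trainable.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

# Base model the adapters are attached to
base = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# Rank-16 LoRA adapters; alpha/dropout here are illustrative values only
config = LoraConfig(task_type=TaskType.SEQ_CLS, r=16, lora_alpha=32, lora_dropout=0.1)
peft_model = get_peft_model(base, config)

# Prints trainable vs. total parameters, so the small trainable fraction can be verified
peft_model.print_trainable_parameters()
```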
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

# Load the base model and tokenizer
model_name = "distilbert-base-uncased"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the PALADIM LoRA adapters on top of the base model
model = PeftModel.from_pretrained(model, "nickagge/paladim-sentiment")
model.eval()

# Inference
text = "This movie was absolutely fantastic! I loved every minute of it."
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    outputs = model(**inputs)
prediction = torch.argmax(outputs.logits, dim=-1).item()
print("Sentiment:", "Positive" if prediction == 1 else "Negative")
```
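If you want class probabilities rather than a hard label, you can apply a softmax to the logits, continuing from the snippet above (the label mapping 0 = Negative, 1 = Positive is assumed from the check in the last line):

```python
# Continuing from the snippet above: convert logits to class probabilities
probs = torch.softmax(outputs.logits, dim=-1)[0].tolist()
print(f"Negative: {probs[0]:.3f}, Positive: {probs[1]:.3f}")
```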
## PALADIM Architecture
This model demonstrates PALADIM's Plastic Memory component using LoRA adapters for rapid task adaptation.
Full PALADIM includes:
- Plastic Memory (LoRA) - Fast adaptation
- Consolidation Engine (EWC + KD) - Prevent forgetting (see the sketch after this list)
- Mixture of Experts - Sparse activation
- Meta-Controller - Adaptive learning
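This card only ships the Plastic Memory piece (the LoRA adapters above); the Consolidation Engine is described but its code is not released here. As a rough illustration of the EWC half of that component, the standard elastic-weight-consolidation penalty looks like the sketch below; the names `fisher`, `old_params`, and `ewc_lambda` are hypothetical and not part of this checkpoint.

```python
import torch

def ewc_penalty(model, fisher, old_params, ewc_lambda=0.4):
    """Quadratic penalty discouraging drift from previously learned weights.

    fisher: per-parameter Fisher information estimates from the old task.
    old_params: snapshot of the parameters after the old task was learned.
    """
    penalty = 0.0
    for name, param in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * ewc_lambda * penalty

# Typical use during continual fine-tuning:
#   total_loss = task_loss + ewc_penalty(model, fisher, old_params)
```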
## Training
- Epochs: 2
- Final Loss: 0.0000
- Final Accuracy: 100.00%
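The exact training script is not included in this card. Below is a minimal sketch of how a comparable 2-epoch LoRA fine-tune on an IMDB subset could be set up with `transformers` and `peft`; the subset size, learning rate, batch size, and sequence length are assumptions, not the values used for this checkpoint.

```python
# Reproduction sketch only: hyperparameters below are assumptions, except the
# 2 epochs and rank-16 LoRA setup stated in this card.
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
model = get_peft_model(model, LoraConfig(task_type=TaskType.SEQ_CLS, r=16))

# Small IMDB subset (size chosen arbitrarily for illustration)
dataset = load_dataset("imdb", split="train").shuffle(seed=42).select(range(2000))
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=256),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="paladim-sentiment",
                           num_train_epochs=2,
                           per_device_train_batch_size=16,
                           learning_rate=2e-4),
    train_dataset=dataset,
)
trainer.train()
model.save_pretrained("paladim-sentiment")  # saves only the LoRA adapter weights
```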
## Citation
```bibtex
@software{paladim2024,
  title={PALADIM: Pre Adaptive Learning Architecture of Dual-Process Hebbian-MoE Schema},
  author={nickagge},
  year={2025},
  url={https://huggingface.co/nickagge/paladim}
}
```