# HydroShell-1.2B: Liquid Linux Expert
HydroShell-1.2B is a specialized, multilingual fine-tune of Liquid AI's LFM2.5-1.2B model, optimized to act as a high-performance, low-latency assistant for Linux system administration, shell scripting, and DevOps automation.
By leveraging the Liquid Foundation Model architecture, HydroShell excels at processing long-form technical instructions and mapping complex natural language (English, Arabic, and Tamil) to functional Bash one-liners.
## ⚠️ Safety & Destructive Command Warning
WARNING: This model is designed to generate powerful system-level commands. It can and will generate destructive commands (e.g., `rm -rf`, `mkfs`, or overwriting configurations with `>`).
- Always verify commands in a sandbox or test environment before executing them on production systems (a minimal pre-flight check is sketched after this list).
- The model may occasionally hallucinate flags or mix up Linux distributions (e.g., suggesting `pacman` for Ubuntu systems).
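Because the generated output is plain text, nothing prevents it from being piped straight into a shell. As a first line of defense, you can screen commands before execution; the helper below is an illustrative sketch (the pattern list and function are hypothetical, not shipped with the model):

```python
import re

# Hypothetical pre-execution guard: scan a generated command for obviously
# destructive patterns before it is ever run.
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-\w*r\w*f",      # rm -rf style recursive force deletes
    r"\bmkfs(\.\w+)?\b",      # filesystem creation
    r"\bdd\b.*\bof=/dev/",    # writing directly to a block device
    r"(?<!>)>\s*/etc/",       # single '>' overwrite of a file under /etc
]

def looks_destructive(command: str) -> bool:
    """Return True if the command matches any destructive pattern."""
    return any(re.search(p, command) for p in DESTRUCTIVE_PATTERNS)

for cmd in ["ls -la /var/log", "rm -rf /var/log/*", "echo x > /etc/fstab"]:
    print("BLOCK" if looks_destructive(cmd) else "ok   ", cmd)
```

A regex filter like this is only a coarse safeguard; running generated commands inside a disposable container or VM remains the safer default.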
## Model Details
- Developed by: [Your Name/MindLab]
- Base Model: LiquidAI/LFM2.5-1.2B-Instruct
- Architecture: Liquid Foundation Model (Dynamical Systems-based)
- Primary Domain: Linux CLI, Bash Scripting, System Hardening.
- Languages Supported: English, Arabic (Technical).
## Evaluation Results (Zero-Shot Testing)
The following results were observed during a 100-prompt "Stress Test" covering System Audit, Security, and File Management.
### Technical Performance Matrix
| Category | Accuracy | Notes |
|---|---|---|
| Basic Admin (`ls`, `cd`, `mkdir`) | 98% | Flawless execution. |
| Log Parsing (`awk`, `sed`, `grep`) | 75% | Occasionally confuses line vs. field flags. |
| Systemd & Services | 90% | Strong understanding of service lifecycles. |
| Networking (`iptables`, `ss`) | 82% | Occasional source/destination flag inversion. |
### Multilingual Capability
- Arabic: 90% Accuracy in intent recognition. Successfully maps Arabic technical terms like "حظر" (Block) and "مزامنة" (Sync).
- English: 95% Accuracy in intent recognition.
## Known Issues & Limitations
- Distro Confusion: The model may suggest Arch Linux (`pacman`) commands when asked for Ubuntu tasks if the prompt is not specific.
- Redirection Risks: In some tests, the model used `>` (overwrite) instead of `>>` (append) for configuration files.
- Hallucination: For very complex `find` commands, it may invent non-existent flags (e.g., `-md5`); see the flag-check sketch below.
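Since hallucinated flags are the hardest failure to spot by eye, one mitigation is to cross-check the flags in a generated command against the target tool's own `--help` output before running it. The helper below is a rough, hypothetical sketch (it assumes the tool is installed locally and that its help text lists its supported flags):

```python
import shlex
import subprocess

def unknown_flags(command: str) -> list[str]:
    """Return flags in `command` that do not appear in the tool's --help text."""
    tokens = shlex.split(command)
    if not tokens:
        return []
    tool, flags = tokens[0], [t for t in tokens[1:] if t.startswith("-")]
    try:
        result = subprocess.run([tool, "--help"], capture_output=True, text=True, timeout=5)
        help_text = result.stdout + result.stderr
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return flags  # cannot verify anything, flag everything for review
    return [f for f in flags if f not in help_text]

# '-md5' is not a real find flag, so it should be reported (GNU findutils assumed).
print(unknown_flags("find /etc -type f -md5"))
```

A plain substring check like this misses combined short options and long-option aliases, so treat it as a smoke test rather than a guarantee.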
## Usage (Python)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "your-username/HydroShell-1.2B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True)

# Arabic prompt: "Find the processes consuming the most memory"
messages = [{"role": "user", "content": "البحث عن العمليات التي تستهلك أكبر قدر من الذاكرة"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(input_ids, max_new_tokens=64, do_sample=True, temperature=0.3)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
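If you prefer the higher-level `pipeline` API, the equivalent call looks roughly like this (same placeholder repo id; chat-style message input requires a reasonably recent `transformers` release):

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="your-username/HydroShell-1.2B",  # placeholder repo id from above
    device_map="auto",
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Show the 5 largest files under /var/log"}]
print(pipe(messages, max_new_tokens=64, do_sample=True, temperature=0.3)[0]["generated_text"])
```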
## Citation
If you use this model in your research or projects, please cite the base Liquid AI model and this fine-tuned version.