📱 FunctionGemma-270M-IT: Mobile Actions (LiteRT)
This model is a fine-tune of Google's FunctionGemma-270M-IT, specialized for Mobile Actions. It is packaged in the .litertlm format, making it ready for immediate deployment on mobile devices using LiteRT (formerly TensorFlow Lite).
Deployment & Usage
1. Direct Use on Android
You can deploy this model directly using the Google AI Edge Gallery App.
- Download the model file: mobile-actions_q8_ekv1024.litertlm
- Open the Google AI Edge Gallery App: Navigate to the "Mobile Actions" section.
- Load the Custom Model: Point the app to your downloaded .litertlm file.
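If you prefer to integrate the model into your own Android app instead of the Gallery App, the sketch below shows one possible setup using the MediaPipe LLM Inference API. The package, option builder, and method names come from the MediaPipe Tasks GenAI library; the model path, token limit, and .litertlm compatibility with your installed library version are assumptions you should verify.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Hypothetical path where your app has copied the downloaded model file.
private const val MODEL_PATH = "/data/local/tmp/llm/mobile-actions_q8_ekv1024.litertlm"

fun createEngine(context: Context): LlmInference {
    // Configure the on-device LLM engine to load the quantized .litertlm model.
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath(MODEL_PATH)
        .setMaxTokens(1024) // assumption: matches the ekv1024 cache size in the file name
        .build()
    return LlmInference.createFromOptions(context, options)
}

fun runMobileAction(engine: LlmInference, userQuery: String): String {
    // Prepend the required system prompt so the model emits function calls.
    // The exact chat template (turn markers) depends on the model's tokenizer
    // config; plain concatenation here is a simplifying assumption.
    val prompt = "You are a model that can do function calling with the following functions\n\n$userQuery"
    return engine.generateResponse(prompt)
}
```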
2. Prompt Format
FunctionGemma requires a specific system prompt to activate its function-calling logic.
System Prompt (Developer Role):
You are a model that can do function calling with the following functions
Example User Input:
"Create a calendar event for lunch tomorrow at 12 PM"
Model Output:
<start_function_call>call:create_calendar_event{title:"Lunch", start_time:"2024-xx-xxT12:00:00"}<end_function_call>
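To act on an output like the one above, the host app must extract the function name and arguments from the tagged string. Below is a minimal Kotlin parsing sketch; the tag names and call syntax mirror the example output, while the regex-based argument parser is purely illustrative (a real integration would validate arguments against the declared function schema).

```kotlin
// Parses output such as:
// <start_function_call>call:create_calendar_event{title:"Lunch", start_time:"2024-xx-xxT12:00:00"}<end_function_call>
data class FunctionCall(val name: String, val args: Map<String, String>)

fun parseFunctionCall(output: String): FunctionCall? {
    // Grab the function name and raw argument body between the start/end tags.
    val match = Regex("<start_function_call>call:(\\w+)\\{(.*)\\}<end_function_call>")
        .find(output) ?: return null
    val (name, rawArgs) = match.destructured

    // Extract key:"value" pairs (illustrative; not a full JSON parser).
    val args = Regex("(\\w+):\"([^\"]*)\"")
        .findAll(rawArgs)
        .associate { it.groupValues[1] to it.groupValues[2] }

    return FunctionCall(name, args)
}

fun main() {
    val sample = "<start_function_call>call:create_calendar_event" +
        "{title:\"Lunch\", start_time:\"2024-xx-xxT12:00:00\"}<end_function_call>"
    println(parseFunctionCall(sample))
    // FunctionCall(name=create_calendar_event, args={title=Lunch, start_time=2024-xx-xxT12:00:00})
}
```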
❤️ Support & Community
If you find this model useful for your mobile AI projects, please consider leaving a Like ❤️ on the Hugging Face repository!
Special thanks to Google for the Gemma 3 architecture and the open-source community for making edge AI accessible.
Disclaimer: This model is designed for specialized function calling. It is not intended for general-purpose chatbot dialogue.
Key Highlights
- Ultra-Lightweight: At just 270M parameters, it's designed for low-latency, on-device inference without needing cloud connectivity.
- Expert at Mobile Tasks: Translates natural language (e.g., "Turn on the flashlight", "Set an alarm for 7 AM") into structured function calls for Android OS tools.
- Privacy Centric: Runs entirely offline, ensuring user queries and data remain private and secure on the device.
- Optimized Format: Provided as a quantized q8 .litertlm file for the best balance between performance and accuracy on edge hardware.
Model Technical Details
| Detail | Specification |
|---|---|
| Base Model | google/functiongemma-270m-it |
| Architecture | Gemma 3 (270M) |
| Context Length | 32K tokens |
| Quantization | Q8 (8-bit) |
| Format | .litertlm (LiteRT) |
| Primary Task | Function Calling / Mobile Actions |
Background & Dataset
This model was fine-tuned using the Mobile Actions dataset and the official Google fine-tuning recipe.
It demonstrates how a small, specialized model can achieve state-of-the-art performance on narrow agentic workflows, rivaling much larger models within its domain.