πŸ“± FunctionGemma-270M-IT: Mobile Actions (LiteRT)


This model is a fine-tune of Google's FunctionGemma-270M-IT, optimized for Mobile Actions. It is packaged in the .litertlm format, making it ready for immediate deployment on mobile devices with LiteRT (formerly TensorFlow Lite).

πŸš€ Deployment & Usage

1. Direct Use on Android

You can deploy this model directly using the Google AI Edge Gallery App.

  1. Download the model file: mobile-actions_q8_ekv1024.litertlm
  2. Open the Google AI Edge Gallery App: Navigate to the "Mobile Actions" section.
  3. Load the Custom Model: Point the app to your downloaded .litertlm file.
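Alternatively, the model can be loaded programmatically. The sketch below uses the MediaPipe LLM Inference API on Android; it assumes your MediaPipe Tasks GenAI build accepts .litertlm bundles and that the model file has already been pushed to the device path shown (the path and token limit are placeholders, not part of this repository).

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Hypothetical on-device location; push the downloaded .litertlm file here first.
private const val MODEL_PATH = "/data/local/tmp/llm/mobile-actions_q8_ekv1024.litertlm"

fun generateMobileAction(context: Context, prompt: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath(MODEL_PATH)
        .setMaxTokens(1024) // placeholder; align with the ekv1024 context budget of this export
        .build()

    // Load the model and run a single blocking generation.
    val llm = LlmInference.createFromOptions(context, options)
    return llm.generateResponse(prompt)
}
```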

2. Prompt Format

FunctionGemma requires the following system prompt to activate its function-calling logic.

System Prompt (Developer Role):

You are a model that can do function calling with the following functions

Example User Input:

"Create a calendar event for lunch tomorrow at 12 PM"

Model Output:

<start_function_call>call:create_calendar_event{title:"Lunch", start_time:"2024-xx-xxT12:00:00"}<end_function_call>
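On the application side, function calls can be recovered by matching the <start_function_call> / <end_function_call> markers shown above. A minimal Kotlin sketch follows; the FunctionCall holder and the regex-based extraction are illustrative, and richer argument parsing is left to the app.

```kotlin
// Hypothetical holder for a parsed call; field names are illustrative.
data class FunctionCall(val name: String, val rawArgs: String)

// Extracts calls delimited by <start_function_call> ... <end_function_call>,
// e.g. call:create_calendar_event{title:"Lunch", start_time:"..."}.
fun parseFunctionCalls(output: String): List<FunctionCall> {
    val callPattern = Regex(
        """<start_function_call>call:(\w+)\{(.*?)\}<end_function_call>""",
        RegexOption.DOT_MATCHES_ALL
    )
    return callPattern.findAll(output)
        .map { match -> FunctionCall(match.groupValues[1], match.groupValues[2]) }
        .toList()
}
```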


❀️ Support & Community

If you find this model useful for your mobile AI projects, please consider leaving a Like ❀️ on the Hugging Face repository!

Special thanks to Google for the Gemma 3 architecture, and to the open-source community for making edge AI accessible.


Disclaimer: This model is designed for specialized function calling. It is not intended for general-purpose chatbot dialogue.

🌟 Key Highlights

  • Ultra-Lightweight: At just 270M parameters, it's designed for low-latency, on-device inference without needing cloud connectivity.
  • Expert at Mobile Tasks: Translates natural language (e.g., "Turn on the flashlight", "Set an alarm for 7 AM") into structured function calls for Android OS tools.
  • Privacy Centric: Runs entirely offline, ensuring user queries and data remain private and secure on the device.
  • Optimized Format: Provided as a quantized Q8 .litertlm file, balancing performance and accuracy on edge hardware.

πŸ›  Model Technical Details

| Detail | Specification |
| --- | --- |
| Base Model | google/functiongemma-270m-it |
| Architecture | Gemma 3 (270M) |
| Input Context | 32K tokens |
| Quantization | Q8 (8-bit) |
| Format | .litertlm (LiteRT) |
| Primary Task | Function Calling / Mobile Actions |

πŸ“– Background & Dataset

This model was fine-tuned using the Mobile Actions dataset and the official Google fine-tuning recipe.

It demonstrates how a small, specialized model can achieve state-of-the-art performance on a specific agentic workflow, rivaling much larger models within its domain.

