This guide explains how to set up ExecuTorch for Android using a demo app. The app uses a DeepLab v3 model for image segmentation. The model is exported to ExecuTorch with the XNNPACK backend in FP32.
The app is built with Kotlin and Jetpack Compose, providing a modern, declarative UI experience.
- Image Segmentation: Detects and highlights all 21 PASCAL VOC classes (Person, Dog, Cat, Car, etc.)
- Overlay Visualization: Segmentation mask blends with the original image at 50% opacity
- Inference Time Display: Shows model inference latency in milliseconds
- In-App Model Download: Download the model directly from the app
- Image Picker: Select any image from your device's gallery
- Sample Images: 3 built-in sample images for quick testing
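The 50% overlay described above amounts to a per-channel average of the original pixel and the mask color. A minimal sketch of that blend in pure Python (the function and values are illustrative; the app does this in Kotlin on Android bitmaps):

```python
def blend(original: tuple[int, int, int],
          mask_color: tuple[int, int, int],
          alpha: float = 0.5) -> tuple[int, int, int]:
    """Blend a segmentation-mask color over an original RGB pixel.

    `alpha` is the mask opacity; 0.5 matches the 50% overlay
    described above.
    """
    return tuple(
        round(alpha * m + (1.0 - alpha) * o)
        for o, m in zip(original, mask_color)
    )

# A red "Person" mask pixel blended over a mid-gray image pixel:
print(blend((128, 128, 128), (255, 0, 0)))  # (192, 64, 64)
```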
- Download and install Android Studio and SDK 34.
- (For exporting the DL3 model) Python 3.10+ with the `executorch` package installed.
The app can download the model automatically. If you want to export it yourself:
```
cd dl3/python
python export.py
```

- Connect your device to your computer via USB.
- Enable USB debugging on your device.
- Open Android Studio and create a new virtual device.
- Start the emulator by clicking the "Play" button next to the device name.
```
cd dl3/android/DeepLabV3Demo
./gradlew installDebug
adb shell am start -n org.pytorch.executorchexamples.dl3/.MainActivity
```

- Open the project at `dl3/android/DeepLabV3Demo`
- Wait for Gradle sync to complete
- Click "Run app" (Control + R)
When the app launches, tap the "Download Model" button. The model will be downloaded and extracted automatically.
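"Download and extract" is an HTTP fetch followed by unzipping into the app's private files directory. A rough Python equivalent of that logic (the URL and paths are placeholders; the app itself performs these steps in Kotlin):

```python
import urllib.request
import zipfile
from pathlib import Path


def download_and_extract(url: str, dest_dir: str) -> list[str]:
    """Fetch a zip archive from `url` and extract it into `dest_dir`.

    Returns the names of the extracted files. The real app does the
    same against its private files/ directory.
    """
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    archive = dest / "model.zip"
    urllib.request.urlretrieve(url, archive)  # also accepts file:// URLs
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
        return zf.namelist()
```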
If you exported the model yourself or want to use a custom model, copy it into the app's private storage. Since the app is built in debug mode, you can use `run-as`:
```
# 1. Push to device temporary storage
adb push dl3_xnnpack_fp32.pte /data/local/tmp/

# 2. Copy to app's private storage using run-as
adb shell "run-as org.pytorch.executorchexamples.dl3 cp /data/local/tmp/dl3_xnnpack_fp32.pte files/"
```

Note: For the QNN backend, change the Maven dependency to `executorch-qnn` and rebuild the app.
Tap "Next sample image" to cycle through 3 built-in sample images.
- Tap "Pick Image" to open your device's gallery
- Select any image (it will be automatically resized to 224x224)
- Tap "Run" to perform segmentation
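The automatic resize to the model's 224×224 input can be sketched as a nearest-neighbor scale in pure Python (the app uses Android's bitmap scaling; this is only illustrative):

```python
def resize_nearest(pixels: list[list[int]],
                   out_w: int = 224, out_h: int = 224) -> list[list[int]]:
    """Nearest-neighbor resize of a 2D pixel grid to out_w x out_h."""
    in_h, in_w = len(pixels), len(pixels[0])
    return [
        [pixels[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

# Downscale a 4x4 grid to 2x2:
grid = [[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11],
        [12, 13, 14, 15]]
print(resize_nearest(grid, out_w=2, out_h=2))  # [[0, 2], [8, 10]]
```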
- Tap "Run" to start inference
- The segmentation overlay appears blended with the original image
- Inference time is displayed below the image
Tap "Reset" to restore the original image without the segmentation overlay.
The app detects all 21 PASCAL VOC classes with distinct color overlays:
| Class | Color | Class | Color |
|---|---|---|---|
| Person | Red | Dog | Green |
| Cat | Magenta | Car | Cyan |
| Bird | Yellow | Bicycle | Green |
| Boat | Blue | Bottle | Orange |
| ...and 13 more | | | |
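Per-pixel class selection is an argmax over the model's class scores, which is then mapped to the colors above. A small Python sketch (the palette is a hypothetical subset with assumed RGB values; the real app covers all 21 classes):

```python
# Hypothetical subset of the 21-class palette described above.
PALETTE = {
    "Person": (255, 0, 0),   # Red
    "Dog": (0, 255, 0),      # Green
    "Cat": (255, 0, 255),    # Magenta
    "Car": (0, 255, 255),    # Cyan
}
CLASSES = list(PALETTE)


def colorize(scores: list[float]) -> tuple[int, int, int]:
    """Pick the highest-scoring class for one pixel and return its color."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return PALETTE[CLASSES[best]]


# Scores for one pixel over (Person, Dog, Cat, Car): Dog wins.
print(colorize([0.1, 2.5, 0.3, 0.7]))  # (0, 255, 0)
```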
```
./gradlew connectedAndroidTest
```

Alternatively, open `app/src/androidTest/java/org/pytorch/executorchexamples/dl3/SanityCheck.kt` or `UIWorkflowTest.kt` in Android Studio and click the Play button.
- `SanityCheck.kt`: Basic module forward pass test
- Downloads model automatically if not present
- Tests model loading from app's private storage
- Validates model output shape (batch_size × classes × width × height)
- `UIWorkflowTest.kt`: Compose UI workflow tests, including:
- Initial UI state verification
- Download button functionality (with and without model present)
- Model run/segmentation testing with inference time display
- Next button to cycle through sample images
- Reset button functionality
- Complete end-to-end workflow (Next → Run → Reset)
- Multiple consecutive runs to test model reusability
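The output-shape validation that SanityCheck.kt performs can be illustrated in Python: check for a 4-D tensor of batch_size × classes × width × height with 21 classes (a hedged sketch under the assumption of a 224×224 output, not the actual test code):

```python
def check_output_shape(shape: tuple[int, ...],
                       num_classes: int = 21,
                       side: int = 224) -> bool:
    """Validate a segmentation output shape of
    (batch_size, classes, width, height)."""
    return (
        len(shape) == 4
        and shape[0] >= 1
        and shape[1] == num_classes
        and shape[2] == side
        and shape[3] == side
    )

print(check_output_shape((1, 21, 224, 224)))  # True
print(check_output_shape((1, 20, 224, 224)))  # False: wrong class count
```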