Repose is a React Native app that lets you adjust facial expressions in photos after they're taken - all from a single image. No need for video or multiple shots - just capture the moment and naturally manipulate expressions later. Built to seamlessly integrate with iOS 18's design language, it combines on-device and cloud image models to enable intuitive manipulation of facial features through simple gestures or precise slider controls.
The app is powered by the LivePortrait model (served via Replicate), while maintaining the familiar Photos app experience through thoughtful UI and haptic feedback. Custom shaders provide smooth loading animations that enhance the user experience during model inference, applying effects only to the subject by using on-device selfie segmentation models. I built this app to explore how we might reduce "capture anxiety" - shifting focus from getting the perfect shot to being present in the moment, knowing expressions can be naturally adjusted later.
Demo videos: repose_1.mp4, repose.mp4
This project is both a technical and a design exploration. It investigates:
- Hybrid model deployment: The app demonstrates how to effectively combine on-device ML models with cloud-based models to create a seamless UX. This hybrid approach balances immediate responsiveness with powerful image editing capabilities.
- Reducing capture anxiety: The project explores how expression editing can reduce the pressure of capturing the perfect moment. Traditional photography often creates anxiety around timing - the need to catch exactly the right smile or expression. By enabling natural, gesture-based manipulation of facial expressions after capture, this project aims to shift focus from technical perfection to being present in the moment. Users can simply take the photo and adjust expressions later through intuitive direct interaction, making the entire picture-taking experience more relaxed and meaningful.
- Extending iOS design language: The app serves as a case study in extending and applying the iOS 18 design language to new interaction patterns. It carefully adopts familiar UI components and animations from the Photos app while thoughtfully expanding them to support novel features:
  - Maintains consistency with system gestures and transitions
  - Preserves familiar navigation patterns and UI components such as sliders, carousels, and image editing controls
  - Extends existing UI components to support new functionality
  - Implements haptic feedback patterns that match system behaviors (sketched below)
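As a concrete illustration of that last point, here is a minimal sketch of system-matching haptics using `expo-haptics`. The trigger points are assumptions about where such feedback could fire, not the app's exact wiring:

```ts
import * as Haptics from "expo-haptics";

// Light selection tick when a slider crosses a detent, mirroring the system Photos editor.
export function onSliderDetent() {
  Haptics.selectionAsync();
}

// Gentle impact when a drag gesture reaches the edge of the allowed expression range.
export function onExpressionBoundaryHit() {
  Haptics.impactAsync(Haptics.ImpactFeedbackStyle.Light);
}

// Success notification once a newly generated expression finishes loading.
export function onExpressionReady() {
  Haptics.notificationAsync(Haptics.NotificationFeedbackType.Success);
}
```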
The project is organized as a monorepo using Turborepo. It consists of the following apps:
- `apps/expo`: The main React Native app built with Expo.
  - Handles user interface and interactions
  - Implements gesture controls for facial expression editing
  - Manages local caching of results
  - Implements a Photos app-like UI with carousels and grid views
- `apps/web`: An API server built with Next.js.
  - Provides API endpoints for inference requests to the Replicate model (sketched below)
  - Handles server-side caching using Vercel KV and Blob Storage
  - Manages photo uploads and storage
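For context, here is a minimal sketch of what such an inference endpoint could look like, assuming the `replicate` and `@vercel/kv` SDKs (Blob Storage is omitted for brevity). The cache-key scheme and output handling are illustrative, not the repo's exact implementation:

```ts
// Illustrative sketch of a Next.js API route that proxies Replicate with KV caching.
import type { NextApiRequest, NextApiResponse } from "next";
import Replicate from "replicate";
import { kv } from "@vercel/kv";

const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN });
const MODEL_IDENTIFIER = process.env.MODEL_IDENTIFIER as `${string}/${string}`;

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const input = req.body; // image URL + quantized expression parameters

  // Deterministic cache key derived from the (already quantized) inputs.
  const cacheKey = `expression:${JSON.stringify(input)}`;

  // 1. Return a cached result if one exists.
  const cached = await kv.get<string>(cacheKey);
  if (cached) {
    return res.status(200).json({ url: cached, cached: true });
  }

  // 2. Otherwise run the model on Replicate and cache the output URL.
  const output = await replicate.run(MODEL_IDENTIFIER, { input });
  // The output shape depends on the model and client version; many image models return a list of URLs.
  const url = Array.isArray(output) ? String(output[0]) : String(output);
  await kv.set(cacheKey, url);

  return res.status(200).json({ url, cached: false });
}
```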
- Direct manipulation of facial features using intuitive gestures
- Real-time preview of expression changes
- Precise control over facial features via sliders with haptic feedback
- Support for the following controls (see the parameter sketch below):
  - Face rotation (pitch, yaw, roll)
  - Eye position and blinking
  - Smile intensity
  - Eyebrow position
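To make the control surface concrete, here is a hypothetical shape for the expression parameters covering the controls above. Field names and ranges are assumptions, not the model's actual input schema:

```ts
// Hypothetical parameter shape for the controls listed above.
// Names and ranges are illustrative; the LivePortrait/expression-editor
// model defines its own input schema.
export interface ExpressionParams {
  rotatePitch: number; // degrees; quantized server-side (e.g. 15° steps)
  rotateYaw: number;   // degrees
  rotateRoll: number;  // degrees
  eyebrow: number;     // -1..1, frown to raised
  blink: number;       // 0..1, open to closed
  pupilX: number;      // -1..1, gaze left/right
  pupilY: number;      // -1..1, gaze up/down
  smile: number;       // 0..1, neutral to full smile
}

export const NEUTRAL_EXPRESSION: ExpressionParams = {
  rotatePitch: 0,
  rotateYaw: 0,
  rotateRoll: 0,
  eyebrow: 0,
  blink: 0,
  pupilX: 0,
  pupilY: 0,
  smile: 0,
};
```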
While the app preemptively generates and caches common facial expressions for immediate access, users may occasionally request uncached variations. In these cases, the app leverages an on-device selfie segmentation model to display a nice loading state. The model isolates the subject from the background, allowing the app to apply animated shader effects specifically to the person in the image. This creates a pleasant loading experience that clearly communicates that the app understands the image and is working on generating a new expression.
| Segmentation | Shader animation | Final result |
| --- | --- | --- |
| ![]() | shader_animation.mov | final_result.mov |
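For reference, here is a minimal sketch of how such a subject mask could be produced on-device with `react-native-fast-tflite` and MediaPipe's selfie segmentation model. The asset path, input size, and thresholding are assumptions about a typical setup, not the repo's exact code:

```ts
import { loadTensorflowModel } from "react-native-fast-tflite";

// MediaPipe selfie segmentation commonly takes a 256x256 RGB float input and
// returns a per-pixel foreground probability. Treat these shapes as assumptions;
// check the .tflite file you actually bundle.
const INPUT_SIZE = 256;

export async function computeSelfieMask(rgbPixels: Float32Array): Promise<Float32Array> {
  // rgbPixels: INPUT_SIZE * INPUT_SIZE * 3 values normalized to [0, 1].
  // In a real app you would load the model once and reuse it across calls.
  const model = await loadTensorflowModel(
    require("../assets/selfie_segmentation.tflite") // hypothetical asset path
  );

  const [output] = model.runSync([rgbPixels]);

  // Threshold the probabilities into a binary mask the shader can sample.
  const mask = new Float32Array(INPUT_SIZE * INPUT_SIZE);
  for (let i = 0; i < mask.length; i++) {
    mask[i] = Number(output[i]) > 0.5 ? 1 : 0;
  }
  return mask;
}
```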
- On-device segmentation models: The app uses Mediapipe's selfie segmentation model to identify and separate different parts of an image, such as the face, hair, and background. This segmentation data is then used to apply shaders selectively to different segments.
- Shaders: A custom shader is applied to the segmented parts of the image. These shaders are written using Skia's runtime effects. See `apps/expo/components/WaveShader.ts` for implementation details.
- Animation: The shaders are animated using React Native Reanimated, which allows for smooth and performant animations without blocking the UI thread. The animation parameters, such as time and position, are dynamically updated to create a looping effect (see the sketch below).
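Putting the last two pieces together, here is a minimal sketch of a Skia runtime effect driven by a looping Reanimated clock. The SkSL is a simple placeholder wave, not the repo's `WaveShader`, and it ignores the segmentation mask for brevity:

```tsx
import React, { useEffect } from "react";
import { Canvas, Fill, Shader, Skia } from "@shopify/react-native-skia";
import { Easing, useDerivedValue, useSharedValue, withRepeat, withTiming } from "react-native-reanimated";

// Placeholder SkSL: a scrolling wave tint. The real WaveShader also samples
// the segmentation mask so the effect only covers the subject.
const source = Skia.RuntimeEffect.Make(`
uniform float u_time;

half4 main(vec2 pos) {
  float wave = 0.5 + 0.5 * sin(pos.y * 0.05 + u_time * 3.0);
  return vec4(0.2 * wave, 0.4 * wave, 1.0 * wave, 1.0);
}
`)!;

export function LoadingShader() {
  const time = useSharedValue(0);

  useEffect(() => {
    // Loop the time uniform forever; updates stay on the UI thread.
    time.value = withRepeat(
      withTiming(2 * Math.PI, { duration: 2000, easing: Easing.linear }),
      -1,
      false
    );
  }, [time]);

  const uniforms = useDerivedValue(() => ({ u_time: time.value }), [time]);

  return (
    <Canvas style={{ flex: 1 }}>
      <Fill>
        <Shader source={source} uniforms={uniforms} />
      </Fill>
    </Canvas>
  );
}
```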
The app implements a subtle but important UX detail to help users intuitively feel the boundaries of possible expressions. When using drag gestures to adjust facial features, users manipulate a focal point represented by an animated ball. As this point approaches the edge of possible expressions, a rubber band margin effect kicks in:
- Within a certain margin of the edge, the focal point begins to resist movement
- If released within this margin, it smoothly animates back to the nearest valid position
- The effect is similar to stretching your neck to its limit and feeling it naturally pull back
- This creates an embodied interaction that clearly communicates boundaries without breaking flow
- Particularly satisfying with eyebrow controls, where you can feel the natural limits of a frown
This approach feels natural and intuitive when using drag gestures because it maps to our physical understanding of how objects behave at their limits. Rather than using hard stops or visual indicators, it leverages our innate sense of elasticity and resistance. The smooth animation and gradual increase in resistance provide tactile feedback about expression limits while maintaining a playful, natural feel to the interactions.
| Focal point debug | Face margins | Eyebrow margins |
| --- | --- | --- |
| rubber_band_debug.mp4 | rubber_band_face.mov | rubber_band_eyebrows.mov |
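Here is a minimal sketch of the resistance-and-snap-back behavior using React Native Gesture Handler and Reanimated. The margin math and constants are illustrative, not the app's exact tuning:

```ts
import { Gesture } from "react-native-gesture-handler";
import { useSharedValue, withSpring } from "react-native-reanimated";

const LIMIT = 100;  // farthest valid offset for the focal point (illustrative)
const MARGIN = 30;  // rubber band zone beyond the limit (illustrative)

// Past the limit, additional drag distance is dampened so the focal point
// resists movement the further it is stretched.
function rubberBand(value: number): number {
  "worklet";
  const sign = Math.sign(value);
  const abs = Math.abs(value);
  if (abs <= LIMIT) return value;
  const overshoot = abs - LIMIT;
  // Diminishing returns: the overshoot asymptotically approaches MARGIN.
  return sign * (LIMIT + MARGIN * (1 - Math.exp(-overshoot / MARGIN)));
}

export function useFocalPointGesture() {
  const x = useSharedValue(0);
  const startX = useSharedValue(0);

  const pan = Gesture.Pan()
    .onBegin(() => {
      startX.value = x.value;
    })
    .onUpdate((e) => {
      x.value = rubberBand(startX.value + e.translationX);
    })
    .onEnd(() => {
      // Released inside the margin: spring back to the nearest valid position.
      if (Math.abs(x.value) > LIMIT) {
        x.value = withSpring(Math.sign(x.value) * LIMIT);
      }
    });

  return { pan, x };
}
```

The returned `x` would then drive an animated style on the focal point ball via `useAnimatedStyle`, with the vertical axis handled the same way.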
The app uses a multi-level caching strategy combined with proactive processing to provide a low-latency, responsive experience. When a user uploads a new photo, the app immediately begins processing a predefined set of common expression variations in the background. Rather than allowing infinite combinations of parameters, the app quantizes input values (e.g. rotation angles are limited to 15° increments) to maximize cache hits while still allowing enough flexibility to generate a wide range of expressions. This means that most user interactions will hit a pre-warmed cache (a sketch of the lookup flow follows the list below):
- In-memory caching: Generated expressions are temporarily stored in memory using a dedicated cache object for instant retrieval. The app leverages view transitions to proactively populate this cache with images that may be needed soon.
- Local storage caching: Results are cached locally using `AsyncStorage`. When a result is found there, it is also stored in the in-memory cache for faster subsequent access. This provides offline access to previously generated expressions.
- Server-side caching: Vercel Blob Storage and Vercel KV (Redis) are used for server-side caching. The server generates a cache key based on the input parameters and checks Redis and Blob Storage for cached results. If nothing is found, the model is run and the result is cached in Redis and Blob Storage for future requests.
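A minimal sketch of the quantize-then-look-up flow on the client, assuming `AsyncStorage` from `@react-native-async-storage/async-storage`; the key format and the `fetchFromServer` helper are illustrative, not the repo's actual code:

```ts
import AsyncStorage from "@react-native-async-storage/async-storage";

// Snap a raw slider/gesture value to a grid so nearby inputs share a cache entry
// (e.g. rotation angles quantized to 15° steps).
export function quantize(value: number, step: number): number {
  return Math.round(value / step) * step;
}

const memoryCache = new Map<string, string>();

export async function getExpression(
  photoId: string,
  params: Record<string, number>,
  fetchFromServer: (key: string) => Promise<string> // hypothetical API helper
): Promise<string> {
  const key = `${photoId}:${JSON.stringify(params)}`;

  // 1. In-memory cache: instant.
  const inMemory = memoryCache.get(key);
  if (inMemory) return inMemory;

  // 2. Local storage: survives restarts and works offline.
  const local = await AsyncStorage.getItem(key);
  if (local) {
    memoryCache.set(key, local);
    return local;
  }

  // 3. Server (which in turn checks KV/Blob before running the model).
  const remote = await fetchFromServer(key);
  memoryCache.set(key, remote);
  await AsyncStorage.setItem(key, remote);
  return remote;
}
```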
- Browse and select photos from a gallery
- Upload new photos
- Grid view for photo library
- Photo detail view
Before running the app, ensure you have the following installed:
- Node.js (v14 or later)
- Yarn package manager
- Expo CLI
You will also need:
- Replicate API token (for model inference)
- Vercel account (for API server deployment)
- Create a `.env` file in `apps/web`:

  ```
  BLOB_READ_WRITE_TOKEN=
  KV_REST_API_TOKEN=
  KV_REST_API_URL=
  REPLICATE_API_TOKEN=
  ```

- Update `apps/expo/api/constants.ts`:

  ```ts
  export const BASE_URL = "https://your-app-name.vercel.app"; // or localhost if you are running locally
  ```

- Deploy the Cog version of the LivePortrait model on Replicate, or use https://replicate.com/fofr/expression-editor, which will be slower as it runs on an L40S.

- Update the model identifier in `apps/web/pages/api/replicate.ts`:

  ```ts
  const MODEL_IDENTIFIER = "YOUR-REPLICATE-MODEL-IDENTIFIER";
  ```
- Clone the repository:

  ```bash
  git clone https://github.com/your-username/expression-editor.git
  cd expression-editor
  ```

- Install the dependencies:

  ```bash
  yarn install
  ```
To run the Expo app:
- Navigate to the `apps/expo` directory:

  ```bash
  cd apps/expo
  ```

- Start the Expo development server:

  ```bash
  yarn start
  ```

- Follow the instructions in the terminal to run the app on an iOS or Android simulator, or scan the QR code with the Expo Go app on your mobile device.
To run the web app:
- Navigate to the `apps/web` directory:

  ```bash
  cd apps/web
  ```

- Start the Next.js development server:

  ```bash
  yarn dev
  ```

- Update the endpoint in `apps/expo/api/constants.ts` to point to `http://localhost:3000`.
- Cog version of the LivePortrait model by fofr: https://github.com/fofr/cog-expression-editor
- Mediapipe's selfie segmentation model
- React Native
- Expo
- React Native Reanimated - Animation library
- React Native Gesture Handler - Touch handling
- React Native Skia - Graphics and drawing
- react-native-fast-tflite - GPU-accelerated on-device model inference
- Next.js
- TypeScript
- Axios
- Turborepo - Monorepo build system
- Vercel KV (Redis) - Key-value storage
- Vercel Blob Storage - File storage
- Replicate - ML model hosting
This project is licensed under the MIT License - see the LICENSE file for details.