Local Core ML vision for macOS

Mac Vision Tools

A menu bar workspace for real-time object detection, emotion monitoring, privacy guardrails, and focus sessions from camera or screen capture.

Four workflows

Built for live visual awareness

Standard

Object detection

Run a bundled SSD MobileNet detector or bring your own Core ML model for labeled detections.
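
A minimal sketch of that pipeline with Apple's Vision framework, assuming a generated model class named SSDMobileNet (the actual class name depends on the bundled model file):

    import CoreML
    import Vision

    // Run a Core ML detector over one frame and return labeled boxes.
    // "SSDMobileNet" stands in for the generated class of whichever
    // model is bundled; any detector wrapped in VNCoreMLModel works.
    func detectObjects(in frame: CGImage) throws -> [VNRecognizedObjectObservation] {
        let coreMLModel = try SSDMobileNet(configuration: MLModelConfiguration()).model
        let visionModel = try VNCoreMLModel(for: coreMLModel)
        let request = VNCoreMLRequest(model: visionModel)
        request.imageCropAndScaleOption = .scaleFill
        try VNImageRequestHandler(cgImage: frame, options: [:]).perform([request])
        return request.results as? [VNRecognizedObjectObservation] ?? []
    }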

Emotion

Face emotion cues

Detect faces first, classify the visible emotion, and keep a short recent history.
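
In Vision terms, that two-stage flow looks roughly like the sketch below; the classifier here wraps a hypothetical emotion model, not a named API:

    import CoreImage
    import Vision

    // Stage 1: find face rectangles. Stage 2: crop each face and run a
    // classifier over it. `classifier` wraps a hypothetical emotion model.
    func emotionLabels(in frame: CGImage, classifier: VNCoreMLModel) throws -> [String] {
        let faceRequest = VNDetectFaceRectanglesRequest()
        try VNImageRequestHandler(cgImage: frame, options: [:]).perform([faceRequest])
        let faces = faceRequest.results as? [VNFaceObservation] ?? []

        let ciFrame = CIImage(cgImage: frame)
        var labels: [String] = []
        for face in faces {
            // Vision reports normalized boxes; convert to pixel coordinates.
            let box = VNImageRectForNormalizedRect(face.boundingBox,
                                                   frame.width, frame.height)
            let classify = VNCoreMLRequest(model: classifier)
            try VNImageRequestHandler(ciImage: ciFrame.cropped(to: box), options: [:])
                .perform([classify])
            if let top = (classify.results as? [VNClassificationObservation])?.first {
                labels.append(top.identifier)   // feeds the recent-history buffer
            }
        }
        return labels
    }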

Privacy

Presence threshold

Count visible people and start the screen saver when your configured threshold is reached.
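
A sketch of that check, assuming the common workaround of launching ScreenSaverEngine.app to start the saver (macOS has no dedicated public API for this):

    import AppKit
    import Vision

    // Count people in the frame and kick off the screen saver at the
    // configured threshold. Launching ScreenSaverEngine.app is a common
    // workaround, not an official screen-saver API.
    func enforcePresenceThreshold(on frame: CGImage, threshold: Int) throws {
        let request = VNDetectHumanRectanglesRequest()
        try VNImageRequestHandler(cgImage: frame, options: [:]).perform([request])
        let peopleCount = request.results?.count ?? 0

        if peopleCount >= threshold {
            let engine = URL(fileURLWithPath:
                "/System/Library/CoreServices/ScreenSaverEngine.app")
            NSWorkspace.shared.openApplication(at: engine,
                                               configuration: NSWorkspace.OpenConfiguration())
        }
    }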

Focus

Attention timer

Use native Apple Vision head-pose tracking to count focused time toward a session goal.
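
A sketch of the underlying pose check: VNFaceObservation exposes roll, yaw, and (on macOS 12 and later) pitch in radians. The 0.35 rad (~20°) tolerance is an assumption, not the app's actual setting:

    import Vision

    // Treat the user as focused when the detected head pose points
    // roughly toward the screen. Angles are radians; pitch is nil on
    // macOS versions before 12.
    func isFocused(on frame: CGImage) throws -> Bool {
        let request = VNDetectFaceRectanglesRequest()
        try VNImageRequestHandler(cgImage: frame, options: [:]).perform([request])
        guard let face = (request.results as? [VNFaceObservation])?.first else {
            return false   // no face visible: the session timer should pause
        }
        let yaw = face.yaw?.doubleValue ?? 0
        let pitch = face.pitch?.doubleValue ?? 0
        return abs(yaw) < 0.35 && abs(pitch) < 0.35
    }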

Camera or screen

Choose the capture source and display style

Screenshots: windowed detection preview, emotion mode controls, and the privacy guard threshold panel.

On-device by design

Inference stays local

Mac Vision Tools uses Core ML, Vision, AVFoundation, and ScreenCaptureKit on your Mac. Camera and screen frames are processed for live detections and are not saved by the app.

Permissions are explicit: camera capture needs Camera access, and screen capture needs Screen Recording access in macOS System Settings.
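
On the camera side, that explicit step maps onto the standard AVFoundation prompt; a minimal sketch of the flow (Screen Recording consent, by contrast, is granted by the user in System Settings):

    import AVFoundation
    import Foundation

    // Minimal sketch of the camera permission check. Screen Recording
    // consent is handled by macOS itself in System Settings.
    func withCameraAccess(_ startCapture: @escaping () -> Void) {
        switch AVCaptureDevice.authorizationStatus(for: .video) {
        case .authorized:
            startCapture()
        case .notDetermined:
            AVCaptureDevice.requestAccess(for: .video) { granted in
                if granted { DispatchQueue.main.async(execute: startCapture) }
            }
        default:
            break   // denied or restricted: point the user at System Settings
        }
    }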

The app does not create accounts, run analytics, track users, or send camera or screen content to a server. Custom Core ML models selected by the user stay on the Mac.

Privacy Guard starts the macOS screen saver when the configured person threshold is reached. macOS controls whether the screen saver requires a password.

Open source macOS utility

Build it, run it, adapt it

The app ships with bundled models and also accepts custom Core ML model files for mode-specific experiments.
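
Loading a user-picked model is a couple of Core ML calls; this sketch assumes the file arrives as either a compiled .mlmodelc or a raw .mlmodel/.mlpackage:

    import CoreML

    // Load a custom model entirely on device. Raw model files must be
    // compiled first; MLModel.compileModel writes the compiled result
    // to a temporary directory.
    func loadCustomModel(at url: URL) throws -> MLModel {
        let compiled = url.pathExtension == "mlmodelc"
            ? url
            : try MLModel.compileModel(at: url)
        return try MLModel(contentsOf: compiled)
    }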