Local Core ML vision for macOS
A menu bar workspace for real-time object detection, emotion monitoring, privacy guardrails, and focus sessions from camera or screen capture.
Four workflows
Object detection: Run a bundled SSD MobileNet detector or bring your own Core ML model for labeled detections.
Emotion monitoring: Detect faces first, classify the visible emotion, and keep a short recent history.
Privacy Guard: Count visible people and start the screen saver when your configured threshold is reached.
Focus sessions: Use native Apple Vision head-pose tracking to count focused time toward a session goal.
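The detection workflow above can be sketched with Vision's Core ML integration. This is a minimal, hedged example: the model name "SSDMobileNet" is an assumption, since the app's actual bundled model identifier is not documented here.

```swift
import Vision
import CoreML

// Run a Core ML object detector on a single frame via Vision.
// "SSDMobileNet" is a hypothetical compiled-model name standing in for
// the app's bundled detector or a user-supplied custom model.
func detectObjects(in frame: CGImage,
                   completion: @escaping ([VNRecognizedObjectObservation]) -> Void) throws {
    guard let modelURL = Bundle.main.url(forResource: "SSDMobileNet",
                                         withExtension: "mlmodelc") else { return }
    let mlModel = try MLModel(contentsOf: modelURL)
    let visionModel = try VNCoreMLModel(for: mlModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Object detectors surface labeled boxes as VNRecognizedObjectObservation.
        let observations = request.results as? [VNRecognizedObjectObservation] ?? []
        completion(observations)
    }
    request.imageCropAndScaleOption = .scaleFill

    // Frames are processed in memory and never written to disk.
    let handler = VNImageRequestHandler(cgImage: frame, options: [:])
    try handler.perform([request])
}
```

Each observation carries a bounding box and ranked labels, which is enough to drive the labeled-detection overlay described above.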
Camera or screen
Every workflow can run on a live camera feed or on screen capture, so the same detectors work for what the camera sees and for what is on your display.
On-device by design
Mac Vision Tools uses Core ML, Vision, AVFoundation, and ScreenCaptureKit on your Mac. Camera and screen frames are processed for live detections and are not saved by the app.
Permissions are explicit: camera capture needs Camera access, and screen capture needs Screen Recording access in macOS System Settings.
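Camera permission handling like the above can be sketched with AVFoundation's standard authorization flow; this is an illustrative pattern, not the app's actual code. Screen Recording access has no equivalent runtime prompt API and is granted in System Settings.

```swift
import AVFoundation

// Check and, if needed, request Camera access before starting capture.
func ensureCameraAccess(then start: @escaping () -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        start()
    case .notDetermined:
        // Triggers the one-time macOS camera permission prompt.
        AVCaptureDevice.requestAccess(for: .video) { granted in
            if granted {
                DispatchQueue.main.async(execute: start)
            }
        }
    default:
        // Denied or restricted: the user must change this in
        // System Settings > Privacy & Security > Camera.
        break
    }
}
```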
The app does not create accounts, run analytics, track users, or send camera or screen content to a server. Custom Core ML models selected by the user stay on the Mac.
Privacy Guard starts the macOS screen saver when the configured person threshold is reached. macOS controls whether the screen saver requires a password.
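One common way to start the screen saver programmatically is to launch the system's ScreenSaverEngine app. The sketch below assumes that approach and the standard install path; it is not necessarily how Privacy Guard is implemented.

```swift
import AppKit

// Launch the macOS screen saver by opening ScreenSaverEngine.app.
// macOS, not the app, decides whether waking requires a password.
func startScreenSaver() {
    let engineURL = URL(fileURLWithPath:
        "/System/Library/CoreServices/ScreenSaverEngine.app")
    NSWorkspace.shared.openApplication(at: engineURL,
                                       configuration: NSWorkspace.OpenConfiguration()) { _, error in
        if let error {
            print("Could not start screen saver: \(error)")
        }
    }
}
```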
Open source macOS utility
The app ships with bundled models and also accepts custom Core ML model files for mode-specific experiments.