What is VidSync?

First developed in 2009 and maintained for compatibility with modern Macs, VidSync allows scientists to record and organize complex measurements by clicking on their study subjects on a video screen. It is most often used for “stereo” video measurement from a pair of side-by-side underwater cameras in fisheries research, but it is equally applicable in freshwater, saltwater, laboratory, and even terrestrial settings.

Intuitive user interface

VidSync was designed to be both easy to learn and powerful for users trying to efficiently organize and analyze thousands of scientific measurements.

Measure anything visible

VidSync natively records 3-D coordinates, lengths, and timecodes, plus custom notes and codes, making it easy to compute velocities, volumes, rates, and other derived quantities from these data.
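
For example, a swimming speed can be derived from two measured 3-D points and their timecodes with nothing more than the distance formula. This is a hypothetical sketch; the point format and units shown are illustrative, not VidSync’s exact output:

```python
import math

def speed_between(p1, p2):
    """Mean speed between two measured 3-D points.

    Each point is a hypothetical (x, y, z, time_in_seconds) tuple,
    with coordinates assumed to be in meters.
    """
    x1, y1, z1, t1 = p1
    x2, y2, z2, t2 = p2
    distance = math.dist((x1, y1, z1), (x2, y2, z2))  # straight-line distance
    return distance / (t2 - t1)

# Example: a fish moving 0.3 m in 2 seconds swims at 0.15 m/s.
print(speed_between((0.10, 0.20, 0.50, 12.0), (0.40, 0.20, 0.50, 14.0)))
```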

Multiple export options

Save your data to a spreadsheet or preserve more detailed relationships and metadata with an XML file readable in R, Python, etc.
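
As a rough illustration, an exported XML file can be read with Python’s standard library. The file name and the element and attribute names below are hypothetical placeholders, not VidSync’s actual schema, so inspect a real export and adjust them accordingly:

```python
import xml.etree.ElementTree as ET

# Hypothetical file name and schema; substitute the real element/attribute
# names from an actual VidSync export before using this.
tree = ET.parse("project_export.xml")
root = tree.getroot()

for obj in root.iter("object"):            # assumed element name
    name = obj.get("name", "unnamed")
    for event in obj.iter("event"):        # assumed element name
        event_type = event.get("type", "")
        points = [
            (float(p.get("x")), float(p.get("y")), float(p.get("z")))
            for p in event.iter("point")   # assumed element name
        ]
        print(name, event_type, len(points), "points")
```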

Powerful playback controls

Going far beyond play-pause, VidSync allows deeply customizable playback and frame-jumping controls rooted in real-world scientific measurement applications and experience.

Minimal hardware required

Apart from cameras and a Mac, the hardware required for 3-D measurement with VidSync (for calibration) can be easily made by a student for a few hundred dollars using templates found here.

Peer-reviewed performance

A 2016 publication in the Canadian Journal of Fisheries and Aquatic Sciences detailed the mathematics underlying VidSync and demonstrated sub-millimeter accuracy.

VidSync in action

The powerful playback and data entry features of VidSync, combined with high accuracy, enable an exciting range of video analysis applications.

VidSync measurements are performed by clicking on video clips, with a highly customizable array of measurement icons, annotations, and overlays that provide scientific insights, cues to assist further data entry, and diagnostics of the measurement mathematics.

The video on the left was used to compute the space use of each individual juvenile Chinook Salmon in a school, the first steps toward a publication on territoriality. Some issues with treating these pretty shapes as true “space use” led to interesting new methods for analyzing 3-D spatial data.
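
One simple way to turn a cloud of measured positions into a “space use” volume is a 3-D convex hull, as in the hypothetical sketch below; the positions are made up, and the published analysis may have used a different method:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical 3-D positions (in meters) of one fish over many video frames.
rng = np.random.default_rng(0)
positions = rng.normal(loc=[1.0, 0.5, 0.3], scale=0.1, size=(50, 3))

hull = ConvexHull(positions)
print(f"Convex-hull volume: {hull.volume:.4f} m^3")  # one crude space-use metric
print(f"Hull surface area: {hull.area:.4f} m^2")
```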

In this video, yellow dots are the estimated prey detection locations of an Arctic grayling drift feeding (holding at one position and darting around to intercept items of prey carried to it by the current). Natural and artificial tracers released into the water allowed interpolation of a 3-D water velocity field all around the fish. The fish’s position was measured when it started toward a prey item and again when it captured the item, and the item’s position at the moment of detection was estimated by back-calculating through the water velocity field.
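
The back-calculation step can be sketched as stepping the drifting item backward in time through an interpolated velocity field. Everything below (the velocity field, the time step, the coordinates) is a made-up stand-in for the real analysis:

```python
import numpy as np

def water_velocity(position):
    """Hypothetical stand-in for an interpolated 3-D water velocity field (m/s)."""
    # In the real analysis this would interpolate from measured tracer velocities.
    return np.array([0.25, 0.0, -0.02])

def back_calculate_detection_point(capture_point, travel_time, dt=0.01):
    """Step a drifting prey item backward in time from its capture point.

    travel_time: seconds between the fish starting its maneuver (detection)
    and capturing the item, taken from the video timecodes.
    """
    position = np.asarray(capture_point, dtype=float)
    t = 0.0
    while t < travel_time:
        position -= water_velocity(position) * dt  # drift backward along the flow
        t += dt
    return position

print(back_calculate_detection_point(capture_point=[0.5, 0.1, 0.2], travel_time=0.8))
```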

In this example, VidSync was used to digitize the head and tail positions of a juvenile salmon throughout a prey capture maneuver. Data like these were eventually used to build and test a new model of energy costs.

Learn more

This site holds all the resources needed for any researcher, from the student level upward, to perform research with VidSync.

Technical Background

Learn the basics of the mathematical principles powering VidSync’s 3-D measurements and see the manuscript laying out the details.

Project Showcase

Dozens of research projects worldwide have used VidSync to remotely measure aquatic animals (especially fish) and their environments and study their spatial behavior in 3 dimensions. Read about a few of them here.

User manual and tutorial

This website provides all the important details for using VidSync in written form, and a two-hour video tutorial can help make a new user proficient within a day.

More features of VidSync

Easy calibration


The two objects required to calibrate videos for measurement are easy and inexpensive to build (a few hundred dollars at most). Actual calibration takes about 5 minutes in the field or lab and about 10 minutes on the computer once you’re familiar with the process. A new user can learn the process from scratch in less than an hour.

Measurement accuracy and the magnified preview

Although VidSync’s mathematical methods are capable of sub-millimeter accuracy, your measurements are only as good as your input. VidSync has some powerful tools to let you quickly input points for measurement with as much precision as you want.

When you click on a point to measure in VidSync, you don’t need to worry about exactly where you’re clicking, or which pixel under your mouse pointer is being measured. A magnified, live preview image of the video places a tiny dot (or an elaborate reticle) directly over the point you’re measuring. After clicking, use the arrow keys to nudge the measurement position with sub-pixel precision.

Advanced playback control


VidSync replaces the standard QuickTime playback controls with a new control bar tailored for scientific analysis. We often need to stop immediately when we observe something of interest, then back up and re-watch it two or three times, perhaps at different speeds. Or we may need to jump around the video at fixed intervals (or systematically randomized ones). In a normal video player these tasks are distractingly awkward. In VidSync they’re so easy that after the first hour or two you aren’t even thinking about them.
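
For instance, the “systematically randomized” jumps mentioned above can be thought of as one random timepoint drawn from each fixed-length interval of the video. Here is a hypothetical sketch of generating such a sampling schedule outside VidSync:

```python
import random

def systematic_random_times(video_length_s, interval_s, seed=None):
    """Pick one random timepoint within each fixed-length interval of the video."""
    rng = random.Random(seed)
    times = []
    start = 0.0
    while start < video_length_s:
        end = min(start + interval_s, video_length_s)
        times.append(round(rng.uniform(start, end), 2))
        start += interval_s
    return times

# One sampled frame time per 30-second interval of a 5-minute video.
print(systematic_random_times(video_length_s=300, interval_s=30, seed=1))
```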

Easy organization, annotation, and retrieval of measurements


VidSync organizes measurements into a hierarchy of “objects,” such as fish, and “events,” such as prey capture attempts. Each event may have multiple measured points. Events (such as a conflict between fish) can also be associated with multiple objects; in other words, VidSync supports many-to-many relationships.


Objects determine the color of the symbol that overlays the video to indicate a measured point (for example, I give different fish different colors). Events determine the size and shape of the symbol, how long it stays onscreen, whether it’s connected by a line to other points in the same event (e.g., for a length measurement event), and other visual properties. You don’t have to set these attributes for every individual event. Instead, you define event types (e.g., length measurement, prey capture attempt) and individual events take on the properties of their type.
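
A minimal sketch of this hierarchy, written as Python dataclasses, is shown below; the class and field names are illustrative only and are not VidSync’s internal data model:

```python
from dataclasses import dataclass, field

@dataclass
class EventType:
    """Shared visual properties for every event of this type."""
    name: str                      # e.g. "length measurement" or "prey capture attempt"
    symbol_shape: str = "circle"
    symbol_size: float = 5.0
    connect_points: bool = False   # draw a line between the event's points?

@dataclass
class Event:
    event_type: EventType
    points: list = field(default_factory=list)   # measured 3-D points + timecodes
    notes: str = ""

@dataclass
class TrackedObject:
    """An object such as an individual fish; color is set per object."""
    name: str
    color: str
    events: list = field(default_factory=list)

# Many-to-many: one event (e.g. a conflict) can be attached to several objects.
fish_a = TrackedObject("Fish A", color="red")
fish_b = TrackedObject("Fish B", color="blue")
conflict = Event(EventType("conflict between fish"))
fish_a.events.append(conflict)
fish_b.events.append(conflict)
```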


Many small features of this system have a large effect on the ease with which you can record and retrieve measurements:

  • You can add arbitrary notes to every object or event, useful for sophisticated coding of observations.
  • The point currently selected in the lists is marked with a square crosshair indicator around its symbol on the video overlay.
  • You can select a point through the menus and click “Go” to jump the video to its location.
  • You can right-click near a point on the video overlay to select it in the lists, and click “Go” to jump to the frame on which it was recorded.
  • You can split an object into two, or merge two objects into one, preserving all the measured events.
  • Sort the lists by any field, or use the “Sort” button for the default order (first type, then index, then name).


This system has some very flexible applications. Below, I used it to digitize both the river bottom and the boundary of the woody debris behind the fish, for estimating distance-to-cover.
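
Distance-to-cover can then be computed as the shortest distance from a fish’s measured position to any digitized point on the cover boundary. This is a generic sketch with made-up coordinates, not the exact analysis used here:

```python
import numpy as np

def distance_to_cover(fish_position, cover_points):
    """Shortest straight-line distance from a fish to any digitized cover point."""
    fish = np.asarray(fish_position, dtype=float)
    cover = np.asarray(cover_points, dtype=float)
    return float(np.min(np.linalg.norm(cover - fish, axis=1)))

# Made-up example: a fish position and a few points along a woody-debris boundary.
fish = [0.40, 0.10, 0.25]
debris_boundary = [[0.80, 0.15, 0.20], [0.85, 0.05, 0.30], [0.90, 0.10, 0.25]]
print(distance_to_cover(fish, debris_boundary))
```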

Development of VidSync

VidSync began in 2009 as part of Jason Neuswanger’s fisheries PhD dissertation at the University of Alaska Fairbanks, based partially on methods pioneered by his mentor Nick Hughes and colleague Lon Kelly. Jason is now a senior research scientist in salmon recovery with the Washington Department of Fish and Wildlife and continues to advise VidSync users informally as needed.

VidSync is hosted as an open-source repository on GitHub under the MIT license, so anyone is free to review or modify the code as they see fit.
