Boston: Want to pet the cat in a video? That may soon be possible, thanks to MIT scientists who are developing a new imaging technique that will let you reach in and "touch" objects in videos.

The technique, called Interactive Dynamic Video (IDV), uses traditional cameras and algorithms to analyse the tiny, almost invisible vibrations of an object and create video simulations that users can virtually interact with.

"This technique lets us capture the physical behaviour of objects, which gives us a way to play with them in virtual space," said Abe Davis, a PhD student at Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL).

"By making videos interactive, we can predict how objects will respond to unknown forces and explore new ways to engage with videos," Davis said.

IDV has many possible uses, from filmmakers producing new kinds of visual effects to architects determining if buildings are structurally sound, said Davis.

For example, while the popular Pokemon Go app can drop virtual characters into real-world environments, Davis shows that IDV goes a step further by enabling virtual objects (including Pokemon) to interact with their environments in specific, realistic ways, such as bouncing off the leaves of a nearby bush.

The most common way to simulate objects' motions is to build a 3D model. However, 3D modelling is expensive and can be almost impossible for many objects.

While algorithms exist to track motions in video and magnify them, none can reliably simulate objects in unknown environments.

The research shows that even five seconds of video can have enough information to create realistic simulations.

To simulate the objects, the team analysed video clips to find "vibration modes" at different frequencies, each of which represents a distinct way the object can move.

By identifying the shape of these modes, researchers can predict how these objects will move in new situations.
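
To make the modal idea concrete, here is a rough, hypothetical sketch in Python/numpy. It is not the researchers' actual code or pipeline: it simply treats per-pixel intensity changes as a crude stand-in for the local motion signals IDV recovers, picks the strongest temporal frequencies as "modes", and superposes damped oscillations of those modes to predict how the scene might respond to a virtual poke. All function names and parameters are illustrative.

    import numpy as np

    def extract_vibration_modes(frames, num_modes=3):
        # frames: (T, H, W) grayscale video; intensity change over time is used
        # here as a crude proxy for local motion.
        signal = frames - frames.mean(axis=0, keepdims=True)    # remove the static scene
        spectrum = np.fft.rfft(signal, axis=0)                   # per-pixel temporal FFT
        power = np.abs(spectrum).sum(axis=(1, 2))                # energy at each frequency
        power[0] = 0.0                                           # ignore the DC term
        mode_bins = np.argsort(power)[-num_modes:][::-1]         # strongest frequencies
        freqs = np.fft.rfftfreq(frames.shape[0])[mode_bins]      # cycles per frame
        shapes = spectrum[mode_bins]                             # complex spatial mode shapes
        return freqs, shapes

    def simulate_response(freqs, shapes, impulse=1.0, damping=0.05, steps=120):
        # Superpose damped oscillations of each mode to predict how the scene
        # might move after a virtual "poke" of the given strength.
        t = np.arange(steps)[:, None, None]
        motion = sum(impulse * np.exp(-damping * t) * np.cos(2 * np.pi * f * t) * np.real(shape)
                     for f, shape in zip(freqs, shapes))
        return motion    # (steps, H, W) displacement-like field used to warp a still frame

    # Example on synthetic data: 150 frames of 64x64 "video"
    video = np.random.rand(150, 64, 64).astype(np.float32)
    freqs, shapes = extract_vibration_modes(video)
    response = simulate_response(freqs, shapes)
    print(freqs, response.shape)

In this toy version, the "mode shapes" are just the spatial patterns of the dominant frequencies; the real system recovers far more careful motion estimates, but the principle of identifying modes and re-exciting them is the same.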

"Computer graphics allows us to use 3D models to build interactive simulations, but the techniques can be complicated," said Doug James, a professor at Stanford

University who was not involved in the research. "Davis and his colleagues have provided a simple and
clever way to extract a useful dynamics model from very tiny vibrations in video, and shown how to use it to animate an image," James said.

Davis used IDV on videos of a variety of objects, including a bridge, a jungle gym and a ukulele. With a few mouse-clicks, he showed that he could push and pull the image, bending and moving it in different directions.

He even demonstrated how he could make his own hand appear to telekinetically control the leaves of a bush. Researchers say that the tool has many potential uses in engineering, entertainment, and more.