Throughout the years, filmmakers have experimented with a myriad of artistic ideas and methodologies. Making and breaking rules creates new pathways through the medium, changing and refining how films are put together.

The advent of new technology has done the same.

In the early days of cinema, filmmakers interested in procedural editing devised mathematical formulas to determine how scenes could be cut to create tension or add drama. They might also cut to match the length of a score, or to achieve a mathematical congruence with some other variable in their project.

The introduction of optical printers gave filmmakers the ability to re-photograph strips of film, opening doors for creativity and special effects.

In more recent years, powerful hardware and software, along with programming languages, have allowed artists to push their work further. While procedural editing approaches may seem to swap creativity and vision for math and science, there is a method to them. Viewed as tools to communicate a vision or deliver on creativity, math and science have plenty to offer the filmmaker.

One concept that has been in use, in one form or another, for the better part of a century is algorithmic editing.

What is algorithmic editing?

While it sounds complex, algorithmic editing in its simplest form refers to editing footage according to a schema or plan.

That schema could be as simple as “every four frames, switch from camera A to camera B for two frames, and then return to camera A”, or even simpler: “use one frame of footage from A and one from B, and repeat.”
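
To make that concrete, here is a minimal sketch in Python that expresses both schemas as frame-by-frame edit decision lists. The camera labels “A” and “B” are hypothetical; each entry in the list says which source the corresponding output frame comes from.

```python
def four_two_schema(total_frames):
    """Four frames of camera A, then two frames of camera B, repeated."""
    edit_list = []
    while len(edit_list) < total_frames:
        edit_list.extend(["A"] * 4 + ["B"] * 2)
    return edit_list[:total_frames]

def alternate_schema(total_frames):
    """One frame from A, one from B, repeated."""
    return ["A" if i % 2 == 0 else "B" for i in range(total_frames)]

print(four_two_schema(12))  # ['A', 'A', 'A', 'A', 'B', 'B', 'A', 'A', 'A', 'A', 'B', 'B']
print(alternate_schema(6))  # ['A', 'B', 'A', 'B', 'A', 'B']
```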

This idea that footage will follow a pre-planned roadmap or direct procedural approach is the basis for algorithmic editing. In terms we can get our heads around, it refers to a technique for cutting and reassembling footage based on a schema, outline or model.

Before going too far, it’s worth defining the term. An algorithm, according to the good folks at Oxford, is “a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.”

This is a perfect way to describe and introduce algorithmic editing. Early visionary filmmakers such as Dziga Vertov and Sergei Eisenstein took a mathematical, schematic approach to their montages and scenes within their films, creating complex editing charts to guide their projects.

Eisenstein’s vision was to let audiences see the ideological dimension of the films they were watching through the emotional and psychological influence of the technique. Vertov went further, believing that the unerring precision of his mathematical methods cleaned up the process and could even lead to the betterment of mankind.

A couple of incredible articles by Winnipeg-based filmmaker Clint Enns describe the history and the practical application of algorithmic editing in greater detail than this writer can even comprehend. As a supplement to this article, we highly recommend reading his work.

Putting it to use

Many other early filmmakers used algorithmic editing techniques to great effect, often in the creation of experimental films. The concept certainly lends itself to more avant-garde approaches to filmmaking.

Take, for example, the idea of a basic flicker film. Using two pieces of footage, one all black and the other all white, you edit together a piece that takes a frame from one clip and then a frame from the other, repeating until the clips end. The technique yields a distinctive, and likely nauseating, result. That said, it’s a very pure algorithmic approach.
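
Here is a minimal sketch of that flicker film, assuming OpenCV and NumPy are installed. Rather than cutting two real clips, it synthesizes the all-black and all-white frames directly and alternates them.

```python
import cv2
import numpy as np

WIDTH, HEIGHT, FPS, SECONDS = 640, 480, 24, 5

black = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)       # clip one: all black
white = np.full((HEIGHT, WIDTH, 3), 255, dtype=np.uint8)   # clip two: all white

writer = cv2.VideoWriter("flicker.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"),
                         FPS, (WIDTH, HEIGHT))
for i in range(FPS * SECONDS):
    # One frame from one clip, then one from the other, repeated.
    writer.write(black if i % 2 == 0 else white)
writer.release()
```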

The linear nature of that piece, however, need not be the norm for algorithmically edited projects. Many algorithms developed in software will contain random variables that could change a portion of the equation.

So we’d take the same project, but add a random variable that determines how many frames to pull from the second clip, while still taking just a single frame from the first. We would end up with a frame from clip one, a random number of frames from clip two, another single frame from clip one, and so on.
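
As a sketch, the change to the earlier edit-list code is small; the maximum run length of eight is an arbitrary choice.

```python
import random

def random_schema(total_frames, max_run=8):
    """One frame from A, then a random run of 1 to max_run frames from B."""
    edit_list = []
    while len(edit_list) < total_frames:
        edit_list.append("A")                 # a single frame from clip one
        run = random.randint(1, max_run)      # random run length from clip two
        edit_list.extend(["B"] * run)
    return edit_list[:total_frames]

print(random_schema(20))
```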

What’s different?

While this is an unorthodox approach to editing, it likely also sounds nearly impossible using the tools most of us use every day. This brings us to how algorithmic editing is different from our filmic approach.

When we edit clips in Premiere Pro, FCPX, Media Composer, or whatever platform we prefer, we look at our footage as a clip, something with a start, a middle and an end, that we snip away at. It’s very timeline-based, and rightly so; our editing platforms are modeled on editing film. When we drop a clip into our timeline, we are looking at the digital equivalent of an analog strip of film.

In algorithmic editing, the approach is to consider the footage differently; we’re not accessing it in the same way. We aren’t looking at an entire clip and deciding what to keep and what to discard. With this technique, we have direct access to any frame of any clip within a location. That location could be a database or simply a folder on your machine.

This isn’t to take away from our editing platforms, which certainly give us access to each and every frame of every clip we import, but putting a composition together by hand, even with a simple schema like “one frame from A, one frame from B, repeat”, would be onerous and time-consuming.

Writing a snippet of code to handle the same process takes very little time. The code accesses the footage not as a piece of film but as data; a folder or database becomes one giant container holding the audio and video data of every frame within it.
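
As an illustration, here is a sketch of that “one frame from A, one frame from B” schema applied to real files, assuming OpenCV; “a.mp4” and “b.mp4” are hypothetical clips of matching resolution.

```python
import cv2

cap_a = cv2.VideoCapture("a.mp4")
cap_b = cv2.VideoCapture("b.mp4")
fps = cap_a.get(cv2.CAP_PROP_FPS) or 24.0
size = (int(cap_a.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap_a.get(cv2.CAP_PROP_FRAME_HEIGHT)))

out = cv2.VideoWriter("interleaved.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
while True:
    ok_a, frame_a = cap_a.read()
    ok_b, frame_b = cap_b.read()
    if not (ok_a and ok_b):   # stop when either clip runs out
        break
    out.write(frame_a)        # one frame from A...
    out.write(frame_b)        # ...one frame from B, repeat
out.release()
```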

So, if code can directly access any frame of any clip in a container, then there ought to be a ton of possibilities for using algorithmic editing creatively. It’s possible to use the variables existing in a piece of footage, such as pixel variation and audio deviations, to programmatically determine where to make cuts and how to put a piece of footage back together again.
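
A sketch of the pixel-variation idea, assuming OpenCV and NumPy: score each frame by how much it differs from the previous one, then treat the biggest jumps as candidate cut points. The clip name and the threshold are arbitrary choices.

```python
import cv2
import numpy as np

def motion_scores(path):
    """Mean absolute pixel difference between each pair of consecutive frames."""
    cap = cv2.VideoCapture(path)
    scores, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            scores.append(float(np.mean(cv2.absdiff(gray, prev))))
        prev = gray
    cap.release()
    return scores

scores = motion_scores("clip.mp4")  # hypothetical clip
threshold = np.mean(scores) + 2 * np.std(scores)
cuts = [i + 1 for i, s in enumerate(scores) if s > threshold]
print("candidate cut frames:", cuts)
```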

By that description, the model for edit points can even be a music score. Cory Arcangel has done a handful of impressive experiments using music as the edit driver. One example is a YouTube video of Paganini’s Fifth Caprice, edited together from hundreds of guitar instructional videos. This was likely accomplished by taking the original piece and writing an algorithm that parsed a database of YouTube videos looking for notes that would match. While difficult to watch, the result is an excellent example of algorithmic editing at work.
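
Purely as speculation about the general idea, not Arcangel’s actual method, a note-matching step might look something like this sketch, assuming the librosa audio library: estimate the dominant pitch of each candidate clip and pick the one closest to the target note.

```python
import numpy as np
import librosa

def dominant_pitch(path):
    """Median fundamental frequency (Hz) of a clip's audio."""
    y, sr = librosa.load(path, sr=None, mono=True)
    f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                            fmax=librosa.note_to_hz("C7"), sr=sr)
    return float(np.nanmedian(f0))

def best_match(target_hz, candidate_paths):
    """Pick the clip whose dominant pitch is nearest the target, in cents."""
    def distance(path):
        return abs(1200 * np.log2(dominant_pitch(path) / target_hz))
    return min(candidate_paths, key=distance)

# Hypothetical usage: find the clip that best plays an A4.
# best = best_match(librosa.note_to_hz("A4"), ["lick1.wav", "lick2.wav"])
```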

Who’s doing it?

A great example of a modern talent using algorithmic editing—and demonstrating exactly how he does it—is Devon Crawford, a brilliant young Canadian programmer who has been exploring software and electrical engineering through his own research and documenting it on YouTube.

Devon’s channel covers a wide gamut of subjects for the tech-friendly, software-development crowd, such as reviewing a game he coded in his first year of university, downloading your private Google data (and discovering some crazy things about it), how he got into coding, and so on.

Devon, it seems, has since moved on to larger projects that aren’t being documented as closely online, but one of his more interesting experiments involves using algorithms to edit clips for him.

In the video, he writes a program that edits a video for him, experimenting with variables for different outcomes. As he refines the application, he uses motion detection (remember pixel variation?) to determine the cuts, in essence creating a video from the portions of his clips with the most differentiation.
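
In that spirit (this is a rough sketch, not Devon’s actual code), the motion_scores() idea from earlier can be repurposed for selection rather than cutting: chop the clip into one-second windows and keep the windows with the most pixel variation.

```python
def top_motion_windows(scores, fps=24, keep=5):
    """Start frames of the `keep` one-second windows with the most motion."""
    window = int(fps)
    totals = [(sum(scores[i:i + window]), i)
              for i in range(0, len(scores) - window, window)]
    totals.sort(reverse=True)   # most motion first
    return sorted(start for _, start in totals[:keep])

# starts = top_motion_windows(scores)  # `scores` from the earlier sketch
```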

Pretty interesting, right? In recent years there were companies hoping to leverage this sort of technology to automatically edit action-camera footage. With bright blue skies and hot pink wakeboards, an algorithm such as Devon’s would likely have done as good a job as a human editor.

Other organizations have been experimenting with automated or algorithmic editing. Disney created a multi-camera editing algorithm that uses different variables to determine edit points. In their project, the software estimates the 3D motion of each camera, allowing it to lock in on what the cameras are paying the most attention to. That lets the software correctly determine when and where to cut to another camera. Disney’s algorithm can also determine which camera has the best view of the subject and switch to it, so the cut isn’t just timely but smart.

Not quite a human editor, but certainly getting closer.

What the future holds…

While many of us will stick with our filmic approach to editing, it should be noted that machine learning and the application of algorithms have been all around us for some time. Algorithms drive upscaling, render optimization, keyframe interpolation and other aspects of our work. More complex algorithms will continue to shape what we do, as companies like Adobe keep focusing on machine learning with Adobe Sensei.

Not totally convinced? Take a look at After Effects’ Content-Aware Fill feature. It analyzes the surrounding pixels and frames and synthesizes new content to fill a gap in the footage.

So whether we choose to approach editing from a programmatic angle or stick with our traditional approach, algorithms have been part of cinema for a century and will only become more intrinsic to our editing tools over time. Will algorithmic editing become “automated editing” in the future? In some cases, probably, and there’s certainly an argument for it.

So does the future of cinema have a use for algorithmic editing? Imagine a scenario where a piece of software automatically cuts and pushes out 25 rough cuts of a movie. The studio assembles test audiences to watch the drafts while cameras, sensors and additional software measure their reactions to each and every frame of the video. Based on that data, the studio assembles a final, “perfected cut.”