Roadmap

Note that our goal isn't to have certain features; our goal is to make the best tool possible for our target users. So although this roadmap lists the features we currently plan to build, they can always change based on more feedback from our target users, or on changes in the day-to-day realities our target users face (say, new cameras).

The roadmap will make more sense if you understand our design process. Each feature will go through a 3-step process:

  1. Research and UX design
  2. Rapidly iterate on UI with prototypes and measure results
  3. Pick best UI prototype, do full implementation on live backend

In practice we'll stagger these, so at any given time we're probably working on the implementation of one feature, prototyping another, and doing research on yet another. Some things must be completed in a certain order; others can be done independently.

Either way, we'll be doing monthly time-based releases of all components.

UX/UI

Storyboard view

D O N E!

Multicam

We've done enough research to have a clear idea of how we'll first approach this. Time for a series of prototypes. This is actually the first place we'll use the Timeline View, but we feel that multicam needs to drive the bus here.

Timeline View

Here we mean bringing what we learned from multicam back into our single-cam sequence. This will add the ability to toggle a standard-sequence between Storyboard and Timeline view.

We think we have a solid idea here, but we might learn some surprising things from the multicam work.

Logging

Thanks in particular to Troy James Sobotka and Matt Reyer, we have a pretty solid idea of where we'll start, although a bit more research is needed.

Even so, we're ready to start prototyping.

Nested Sequences

Thus far we haven't spotted places where we can improve much on the state of the art here.

But we think a good place to focus is on making the drill-down/drill-up as fast and clear as possible, and on keeping the user well oriented in the overall story structure at all times. Compared to the competition, we probably have more screen space we can use to keep the user oriented, so let's find a smart way to use it.

More research is needed, but we have a good handle on the key issues and might as well start prototyping.

Good-Enough Audio

We haven't started the research yet, so we should avoid prototyping until we have a much better handle on the problem.

Good-Enough Color Grading

We haven't started the research yet, so we should avoid prototyping until we have a much better handle on the problem.

Backend

Port to GStreamer 1.0

It doesn't make sense to do anything more with GStreamer 0.10, both because of where we're at development-wise and because of how far along GStreamer 1.0 now is.

So we're going to port to GStreamer 1.0 ASAP. This will let us drop the remaining bits of Python 2 code we have, and be all Python 3, all the time.
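
For a taste of what the port means in practice, here's a minimal sketch of driving GStreamer 1.0 from Python 3 through PyGObject introspection (the old static python-gst bindings are gone in 1.0). The file URI is just a placeholder:

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    # In 1.0 everything goes through GObject Introspection, which works
    # the same from Python 3 as it did from Python 2
    Gst.init(None)

    # Build a trivial playback pipeline (placeholder URI)
    pipeline = Gst.parse_launch('playbin uri=file:///tmp/example.mov')
    pipeline.set_state(Gst.State.PLAYING)

    # Block until end-of-stream or an error
    bus = pipeline.get_bus()
    msg = bus.timed_pop_filtered(
        Gst.CLOCK_TIME_NONE,
        Gst.MessageType.EOS | Gst.MessageType.ERROR
    )
    if msg.type == Gst.MessageType.ERROR:
        error, debug = msg.parse_error()
        print('Error:', error.message)
    pipeline.set_state(Gst.State.NULL)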

Turn quality up to 11

We're going to use the cloud for large-scale, automated testing with real-world video files.

This will focus on helping to improve and maintain the quality of GStreamer and the key codec libraries it wraps (like libav, the Xiph codecs, etc.).

Current quality issues aren't really an engineering problem, they're a data problem. Developers need a way to get rapid feedback on the potential regressions a code change might cause with specific real-world video files, across a wide range of playback, transcoding, and NLE scenarios.
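
To make that concrete, here's a rough sketch of the kind of harness we have in mind: run every file in a corpus through a decode-only pipeline and record which ones fail, so the same corpus can be re-run before and after a code change. This is illustrative only; the real cloud tooling will be more involved:

    import sys

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)

    def try_decode(filename):
        """Return None on success, or the GStreamer error message on failure."""
        uri = Gst.filename_to_uri(filename)
        # playbin with fakesinks: decode everything, render nothing
        pipeline = Gst.parse_launch(
            'playbin uri={} audio-sink=fakesink video-sink=fakesink'.format(uri)
        )
        pipeline.set_state(Gst.State.PLAYING)
        msg = pipeline.get_bus().timed_pop_filtered(
            Gst.CLOCK_TIME_NONE,
            Gst.MessageType.EOS | Gst.MessageType.ERROR
        )
        pipeline.set_state(Gst.State.NULL)
        if msg.type == Gst.MessageType.ERROR:
            return msg.parse_error()[0].message
        return None

    for filename in sys.argv[1:]:
        error = try_decode(filename)
        print('{}: {}'.format(filename, error or 'OK'))

Diff the output from two builds against the same corpus and you have a per-file regression report.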

Pre-seek slices in Gnonlin

Accelerated compositing in WebKitGtk

Preview server using Gnonlin

Thumbnail server over HTTP
