Ever wished you had a professional camera operator to highlight and follow the action, ensuring the audience never misses a beat? Thanks to NVIDIA, you can now do this at home for free! The new NVIDIA AR SDK unlocks augmented reality features, including motion tracking for faces.
This allows me to provide you with an automated zoom and crop solution for your video camera, transforming your streams into a slick, polished broadcast where you’ll always be the star of the show. Don’t forget: everything is customizable, so the possibilities are endless. You can even recreate that Futurama squinting meme if you want to (with some scripting)!
The filter requires compatible NVIDIA RTX hardware and the NVIDIA AR SDK Runtime to be installed ahead of time. This filter is considered "stable" and shouldn't change much from version to version.
Adds support for specifying Minimum Bitrate directly in the UI instead of requiring custom settings to do so. Additionally, Adaptive I/B-Frames are now only shown if Look-Ahead is set to a value greater than 0 frames.
Quality Minimum can now also be left at its default value of -1, the Quality group is no longer toggleable, and Quality Target has been moved into that group. The Settings options on the context now also search children (if there are any).
Finally, some C++17 formatting cleanup was done.
Fixes #101
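For the curious, gating Adaptive I/B-Frames on the Look-Ahead value boils down to a property modified callback on the Look-Ahead setting, roughly like this sketch (the property keys used here are placeholders, not necessarily the encoder's real setting names):

```cpp
#include <obs-module.h>

// Hide or show the adaptive frame options depending on the current Look-Ahead value.
static bool lookahead_modified(obs_properties_t* props, obs_property_t*, obs_data_t* settings)
{
	bool visible = obs_data_get_int(settings, "lookahead") > 0; // "lookahead" key is an assumption
	obs_property_set_visible(obs_properties_get(props, "adaptive_i"), visible);
	obs_property_set_visible(obs_properties_get(props, "adaptive_b"), visible);
	return true; // returning true refreshes the properties view
}

// Registered once while building the properties:
//   obs_property_set_modified_callback(obs_properties_get(props, "lookahead"), lookahead_modified);
```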
This drastically improves stability by preventing exceptions from leaking into the libobs C code, which prevents crashes and unexpected freezes caused by exception handlers further down the stack.
Additionally, minor work was done to further improve the quality and user experience of the filter.
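As a rough illustration of the approach (the class and callback names below are made up for this example, they are not the plugin's actual code), every C callback handed to libobs now catches anything thrown before it can unwind into C:

```cpp
#include <obs-module.h>
#include <util/base.h>
#include <exception>

struct my_filter { // hypothetical C++ implementation object
	void render(gs_effect_t* effect);
};

// Callback registered with libobs; no exception may ever escape from here.
static void my_filter_render(void* data, gs_effect_t* effect)
{
	try {
		static_cast<my_filter*>(data)->render(effect);
	} catch (const std::exception& ex) {
		blog(LOG_ERROR, "Unexpected exception in render: %s", ex.what());
	} catch (...) {
		blog(LOG_ERROR, "Unexpected unknown exception in render.");
	}
}
```

The same pattern applies to every other entry point (create, destroy, update, tick, and so on), so a throwing filter logs an error instead of taking OBS down with it.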
Caching the output of a source is only necessary for sources that are really expensive to render, so it is now disabled by default. Thanks to that, most Source Mirrors are now "free" instead of requiring two context switches and an extra texture, while really expensive sources can still be manually set to cache.
The scaling mode is also set to Disabled instead of Point when rescaling is off, which further improves performance; the previous behavior incorrectly caused an extra texture to be used.
Additionally, we now have support for debug markers for graphics debugging, allowing us to tell exactly how the rendering cost of this source changes.
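For those doing graphics captures, the markers simply bracket the mirror's render in a named region that tools such as RenderDoc or Nsight can display. A minimal, purely illustrative sketch (everything except the gs_debug_marker_* calls is an assumption):

```cpp
#include <obs-module.h>

static void mirror_video_render(obs_source_t* target)
{
	static const float marker_color[4] = {0.2f, 0.6f, 1.0f, 1.0f};
	gs_debug_marker_begin(marker_color, "Source Mirror");
	obs_source_video_render(target); // the uncached path: render the mirrored source directly
	gs_debug_marker_end();
}
```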
Adds a new property to control the alignment of the source within the calculated boundary when rescaling. This also fixes the mirror being permanently left-aligned.
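For reference, alignment inside the calculated boundary works roughly like the sketch below, using libobs' OBS_ALIGN_* flags; the helper itself is made up for illustration:

```cpp
#include <obs.h>

// Compute the top-left position of the rescaled source inside the boundary.
static void align_in_boundary(uint32_t align, float bounds_w, float bounds_h,
                              float src_w, float src_h, float& x, float& y)
{
	// Centered by default; the flags push the source towards an edge.
	x = (bounds_w - src_w) * 0.5f;
	y = (bounds_h - src_h) * 0.5f;
	if (align & OBS_ALIGN_LEFT)
		x = 0.0f;
	if (align & OBS_ALIGN_RIGHT)
		x = bounds_w - src_w;
	if (align & OBS_ALIGN_TOP)
		y = 0.0f;
	if (align & OBS_ALIGN_BOTTOM)
		y = bounds_h - src_h;
}
```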
This filter allows another source to be used as a mask, enabling complex filter graphs and trippy effects, such as a text source with three animated videos, each using a different color channel as the mask.
Fixes #18.
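The general idea looks roughly like this hedged sketch (the "mask" parameter name and the surrounding structure are assumptions, not the filter's actual code): the mask source is rendered into an off-screen texture, which is then bound as a parameter of the filter's effect:

```cpp
#include <obs-module.h>
#include <graphics/vec4.h>

// Render the mask source off-screen; rt is created elsewhere with
// gs_texrender_create(GS_RGBA, GS_ZS_NONE).
static gs_texture_t* render_mask(gs_texrender_t* rt, obs_source_t* mask, uint32_t cx, uint32_t cy)
{
	gs_texrender_reset(rt);
	if (gs_texrender_begin(rt, cx, cy)) {
		struct vec4 clear;
		vec4_zero(&clear);
		gs_clear(GS_CLEAR_COLOR, &clear, 0.0f, 0);
		gs_ortho(0.0f, (float)cx, 0.0f, (float)cy, -100.0f, 100.0f);
		obs_source_video_render(mask); // draw the mask source into the texture
		gs_texrender_end(rt);
	}
	return gs_texrender_get_texture(rt);
}

// Later, while rendering the filter itself:
//   gs_effect_set_texture(gs_effect_get_param_by_name(effect, "mask"), mask_texture);
```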
Automatically updates the long description (hover text) when the Type or Subtype field is changed, providing more contextual information about what the selected option does.
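Under the hood this boils down to a property modified callback, roughly like the sketch below (the "type" key and the description strings are placeholders for illustration):

```cpp
#include <obs-module.h>

// Update the hover text of the Type list to describe whatever is currently selected.
static bool type_modified(obs_properties_t*, obs_property_t* prop, obs_data_t* settings)
{
	long long type = obs_data_get_int(settings, "type"); // "type" key is an assumption
	const char* text = (type == 0) ? "Explains what the first option does."
	                               : "Explains what the other options do.";
	obs_property_set_long_description(prop, text);
	return true; // refresh the properties view so the new hover text shows up
}
```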
This refactors the SDF Effects to use a normal blend function instead of doing the blend in the effect itself, improving quality and reducing problematic sampling issues. In addition to this, the effect files have been cleaned up slightly and renamed to their proper names. Glow and Stroke are now supported, which solves both #2 and #4 in one go.
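In sketch form, blending with the regular graphics pipeline instead of inside the effect looks something like this (the gs_* calls are libobs API, the surrounding function is assumed):

```cpp
#include <obs-module.h>

// Draw an already-rendered texture with a standard alpha blend instead of
// blending inside the effect pass itself.
static void draw_blended(gs_effect_t* effect, gs_texture_t* tex, uint32_t cx, uint32_t cy)
{
	gs_blend_state_push();
	gs_enable_blending(true);
	gs_blend_function(GS_BLEND_SRCALPHA, GS_BLEND_INVSRCALPHA); // normal alpha blend
	gs_effect_set_texture(gs_effect_get_param_by_name(effect, "image"), tex);
	while (gs_effect_loop(effect, "Draw"))
		gs_draw_sprite(tex, 0, cx, cy);
	gs_blend_state_pop();
}
```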
The caching optimization has now also been implemented, reducing the number of renders for this filter to one per tick.
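The cache itself is essentially a dirty flag that video_tick sets and the first render of that tick clears, roughly like this (the type and member names are made up for the example):

```cpp
#include <obs-module.h>

struct sdf_cache { // hypothetical container, not the plugin's actual type
	gs_texrender_t* rt    = nullptr; // created with gs_texrender_create(GS_RGBA, GS_ZS_NONE)
	bool            dirty = true;    // set to true from video_tick once per frame
};

// Return the cached result, re-rendering it only on the first call of the tick.
static gs_texture_t* get_cached(sdf_cache& c, uint32_t cx, uint32_t cy)
{
	if (c.dirty) {
		gs_texrender_reset(c.rt);
		if (gs_texrender_begin(c.rt, cx, cy)) {
			// ...run the expensive render pass into the cache here...
			gs_texrender_end(c.rt);
			c.dirty = false; // stay clean until the next video_tick
		}
	}
	return gs_texrender_get_texture(c.rt);
}
```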