Many shader examples share quite a bit of code, and the OBS Studio effect parser and the GPU driver's shader compiler strip unused code quite well. So we can simply share common code between many examples, which drastically improves its overall quality.
While the long descriptions were useful, keeping them updated and translated is pretty much impossible. Technology moves fast, and not everyone who translates the project knows a lot about technology.
Therefore, the long descriptions have been replaced with a button that opens the wiki page for the feature instead. This should drastically reduce the number of support requests and improve translation coverage at the same time.
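For context, wiring up such a button is only a few lines with the libOBS properties API. The sketch below is illustrative rather than the actual plugin code; the property key, button label and wiki URL are placeholders, and it assumes the plugin links against Qt for opening URLs.

```cpp
#include <obs-module.h>
#include <QDesktopServices>
#include <QUrl>

// Placeholder URL and property key, purely for illustration.
static bool open_wiki_clicked(obs_properties_t *, obs_property_t *, void *)
{
	QDesktopServices::openUrl(QUrl("https://github.com/<owner>/<repo>/wiki/<feature>"));
	return false; // nothing in the properties view needs refreshing
}

static obs_properties_t *get_properties(void *)
{
	obs_properties_t *props = obs_properties_create();
	obs_properties_add_button(props, "open_wiki", "Open Wiki", open_wiki_clicked);
	// ... the feature's actual properties follow here ...
	return props;
}
```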
There is hardly any reason to recalculate everything all the time. A LUT lets us cache the work once and then reuse it whenever necessary, reducing the performance impact of Color Grading by almost 60% (even more on some GPUs). Additionally, this fixes the negative gamma issue that plagued the filter for a while.
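The gist of the change, as a rough sketch: re-render the LUT only when a setting changed, and sample the cached texture on every other frame. Names and structure are illustrative, it assumes it runs inside the graphics context (e.g. the filter's render callback), and it is not the filter's actual implementation.

```cpp
#include <obs.h>

// Illustrative caching scheme: the LUT is re-rendered only when a setting
// changed ("dirty"); every other frame simply reuses the cached texture.
struct lut_cache {
	gs_texrender_t *rt    = nullptr;
	bool            dirty = true;
};

gs_texture_t *get_lut(lut_cache &cache, uint32_t size, void (*render_lut)())
{
	if (!cache.rt)
		cache.rt = gs_texrender_create(GS_RGBA, GS_ZS_NONE);

	if (cache.dirty) {
		gs_texrender_reset(cache.rt);
		if (gs_texrender_begin(cache.rt, size, size)) {
			render_lut(); // the expensive grading math runs only here
			gs_texrender_end(cache.rt);
			cache.dirty = false;
		}
	}
	return gs_texrender_get_texture(cache.rt);
}
```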
In the future, once PR 4199 (https://github.com/obsproject/obs-studio/pull/4199) has been merged, we can cut out one intermediate rendering step that is currently required to make the effect work. Hopefully this will land with the 27.x release of OBS Studio.
For simple image and video editing, LUTs (Look-Up Tables) are vastly superior to running the entire editing operation on each pixel - especially if all the processing can be done inside a single shader.
Due to the post-processing requirements for our LUTs, we are limited to 8 bits per channel - though clever use of the unused Alpha channel may provide additional space. For our purposes, however, this is definitely enough.
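To make the addressing concrete, here is one common way to pack a 64x64x64 LUT into a single 512x512 RGBA8 texture and look up a texel from 8-bit input; this is a generic sketch, not necessarily the exact layout the filter uses.

```cpp
#include <cstdint>

// Generic packing: a 64x64x64 LUT stored as an 8x8 grid of 64x64 slices
// inside one 512x512 texture. Each channel is quantized from 8 to 6 bits.
struct texel { uint32_t x, y; };

texel lut_texel_for(uint8_t r, uint8_t g, uint8_t b)
{
	uint32_t ri = r >> 2, gi = g >> 2, bi = b >> 2; // 0..63 each

	// Blue selects the slice; slices are laid out as 8 columns by 8 rows.
	uint32_t tile_x = bi % 8, tile_y = bi / 8;

	return texel{tile_x * 64 + ri, tile_y * 64 + gi};
}
```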
* New translations en-US.ini (Turkish)
* New translations en-US.ini (Sinhala)
* New translations en-US.ini (Spanish)
* New translations en-US.ini (Czech)
* New translations en-US.ini (Serbo-Croatian)
Adds support for the AMD Advanced Media Framework (AMF) H.264 and H.265 encoders via FFmpeg. The majority of settings are supported, and the UI/UX mimics that of the NVENC implementation. Some settings are left out due to their complexity and should instead be controlled via the custom parameters field.
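Under the hood this boils down to opening one of FFmpeg's AMF encoders ("h264_amf" / "hevc_amf") and forwarding the chosen settings as encoder options. The sketch below is a simplified illustration, not the plugin's implementation; the specific values are arbitrary, and any extra keys from the custom parameters field would be passed the same way.

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/dict.h>
}

// Simplified: locate FFmpeg's AMD AMF H.264 encoder and open it with a few
// options. Values are arbitrary; custom parameters would be added to `opts`.
bool open_amf_h264(int width, int height, int fps)
{
	const AVCodec *codec = avcodec_find_encoder_by_name("h264_amf");
	if (!codec)
		return false; // FFmpeg built without AMF, or no AMD driver present

	AVCodecContext *ctx = avcodec_alloc_context3(codec);
	if (!ctx)
		return false;

	ctx->width     = width;
	ctx->height    = height;
	ctx->time_base = AVRational{1, fps};
	ctx->pix_fmt   = AV_PIX_FMT_NV12;
	ctx->bit_rate  = 6000000;

	AVDictionary *opts = nullptr;
	av_dict_set(&opts, "rc", "cbr", 0); // rate control mode

	bool ok = avcodec_open2(ctx, codec, &opts) == 0;
	av_dict_free(&opts);
	avcodec_free_context(&ctx);
	return ok;
}
```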
Implements a manual and automatic update checker with support for both release and testing update channels, allowing users to stay as up to date as possible. It is fully compliant with privacy regulations around the world, as it stays completely silent and inactive until the user gives the OK to connect to GitHub for the latest releases.
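When the user does opt in, the check is little more than a single HTTPS request against GitHub's public releases endpoint, presumably with pre-releases serving the testing channel. A minimal libcurl sketch, where "<owner>/<repo>" and the User-Agent string are placeholders:

```cpp
#include <curl/curl.h>
#include <string>

static size_t collect(char *data, size_t size, size_t nmemb, void *userp)
{
	static_cast<std::string *>(userp)->append(data, size * nmemb);
	return size * nmemb;
}

// Only ever called after the user explicitly agreed to contact GitHub.
std::string fetch_releases_json()
{
	std::string body;
	CURL *curl = curl_easy_init();
	if (!curl)
		return body;

	// "<owner>/<repo>" is a placeholder for the actual repository.
	curl_easy_setopt(curl, CURLOPT_URL,
			 "https://api.github.com/repos/<owner>/<repo>/releases");
	curl_easy_setopt(curl, CURLOPT_USERAGENT, "update-checker"); // GitHub requires a User-Agent
	curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
	curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
	curl_easy_perform(curl);
	curl_easy_cleanup(curl);
	return body; // JSON array; each entry carries a "prerelease" flag
}
```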
Adds colored and inverted-color variations of Luma Burn, enabling fancier transitions with it. All variations with color support smooth fading and allow choosing any color for the transition.
* "Quality" Minimum/Maximum is actually QP Minimum/Maximum
* Bitrate Limits is now just "Limits"
* Buffer Size and Quality Target have been moved into "Limits"
The high-priority CUDA stream causes libOBS to run at a lower priority than the tracking, which is not what we want. Instead we want tracking to fall behind in those cases, rather than slowing down encoding and other tasks.
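One way to express that preference with the CUDA runtime is to create the tracking stream at the lowest priority the device offers, so work submitted by libOBS is scheduled first. This is an illustration of the idea, not the plugin's exact code.

```cpp
#include <cuda_runtime.h>

// Create the tracking stream at the lowest available priority so GPU work
// submitted by libOBS (compositing, encoding) is preferred by the scheduler.
cudaStream_t create_tracking_stream()
{
	// "least" is the lowest priority (numerically the largest value).
	int least = 0, greatest = 0;
	cudaDeviceGetStreamPriorityRange(&least, &greatest);

	cudaStream_t stream = nullptr;
	cudaStreamCreateWithPriority(&stream, cudaStreamNonBlocking, least);
	return stream;
}
```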
Geometry updates are also now done once per frame instead of once per tracking update, which should improve smoothness without affecting performance too much. Additionally, all tracking info is now in the 0..1 range, which drastically simplifies some of the math - especially with texture coordinates.
To deal with tracking and rendering being asynchronous, a very simple approximation of movement velocity has been added. It is mostly wrong, but it can bridge the gap when tracking updates arrive more slowly, as the values are all filtered anyway.
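The velocity bridging roughly amounts to a finite-difference estimate that is advanced once per rendered frame, as sketched below in normalized 0..1 coordinates; names and structure are illustrative only.

```cpp
// Tracked position in normalized 0..1 texture-coordinate space.
struct tracked_point {
	float x = 0.f, y = 0.f;   // last reported position
	float vx = 0.f, vy = 0.f; // rough velocity estimate (units per second)
};

// Called whenever an asynchronous tracking update arrives.
void on_tracking_update(tracked_point &p, float nx, float ny, float dt)
{
	if (dt > 0.f) {
		// Crude finite difference - "mostly wrong", but good enough
		// since all values are filtered afterwards anyway.
		p.vx = (nx - p.x) / dt;
		p.vy = (ny - p.y) / dt;
	}
	p.x = nx;
	p.y = ny;
}

// Called once per rendered frame to bridge the gap until the next update.
void on_frame(tracked_point &p, float frame_dt)
{
	p.x += p.vx * frame_dt;
	p.y += p.vy * frame_dt;
}
```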
The new libOBS API allows us to access the underlying API directly instead of having to mess around in memory. By using it, we can avoid crashes when libOBS was built with a different compiler, or when the actual backend structure changes.
Additionally, the mostly unimplemented and unused options have been removed, which streamlines the use of this class even further and reduces both shader and code complexity.
Finally, by optimizing the use of the internal render target, we achieve a speed-up of up to 3000% over the old approach, allowing for many more mipmapped filters.
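The entry does not spell out the exact render-target change, but a typical form of this optimization in libOBS code is to keep the internal render target alive across frames and only reset it per pass, rather than recreating resources every time. A sketch under that assumption:

```cpp
#include <obs.h>

// Keep one internal render target alive across frames; create it once and
// reuse it for every subsequent render pass.
struct mip_source {
	gs_texrender_t *rt = nullptr;
};

gs_texture_t *render_base_level(mip_source &s, uint32_t w, uint32_t h, void (*draw)())
{
	if (!s.rt)
		s.rt = gs_texrender_create(GS_RGBA, GS_ZS_NONE);

	// Cheap per-frame reset; the backing texture is only reallocated
	// when the requested size actually changes.
	gs_texrender_reset(s.rt);
	if (gs_texrender_begin(s.rt, w, h)) {
		draw(); // render the level the mip chain is generated from
		gs_texrender_end(s.rt);
	}
	return gs_texrender_get_texture(s.rt);
}
```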