Compare commits

...

88 Commits

Author SHA1 Message Date
Michael Fabian 'Xaymar' Dirks bbcce86c47 Fix or disable some useless warnings 2023-10-05 09:11:53 +02:00
Michael Fabian 'Xaymar' Dirks e0ffe85a30 Simplify the CMake file even more
- target_sources(... PUBLIC ...) doesn't do what I thought it did, and has no useful purpose here.
- Experimental features are an Alpha only thing, and Unstable features should not be part of a Candidate release.
- ENABLE_LTO is not a flag anymore, as CMake has a global flag for it.

While we haven't split Core out of the main file yet, and we still keep running into strange duplicate-symbol or undefined-symbol errors, this will hopefully simplify the CMake file further. The end goal is to eventually split StreamFX into smaller sub-plugins that can operate mostly independently and soft-depend on other components, i.e. Blur could softly depend on Dynamic Mask and offer extra features if that component is installed. This is not quite fleshed out yet, and I have no clear idea of how to make it work.
2023-10-05 09:11:53 +02:00
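For the ENABLE_LTO change described in the commit above, a minimal sketch of relying on CMake's built-in IPO/LTO switch might look like the following; the target and source names are illustrative, not StreamFX's actual ones.

```cmake
# Sketch: prefer CMake's built-in IPO/LTO support over a custom ENABLE_LTO option.
cmake_minimum_required(VERSION 3.20)
project(lto_sketch LANGUAGES CXX)

include(CheckIPOSupported)
check_ipo_supported(RESULT HAVE_IPO OUTPUT IPO_ERROR)

# CMAKE_INTERPROCEDURAL_OPTIMIZATION initializes the property of targets created after it,
# and CI can simply pass -DCMAKE_INTERPROCEDURAL_OPTIMIZATION=ON on the command line.
if(HAVE_IPO)
  set(CMAKE_INTERPROCEDURAL_OPTIMIZATION ON)
else()
  message(STATUS "IPO/LTO not available: ${IPO_ERROR}")
endif()

add_library(example_component STATIC example.cpp) # illustrative target and source
```

This mirrors the `-DCMAKE_INTERPROCEDURAL_OPTIMIZATION=ON` flag that the CI workflow diff further down in this compare already passes.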
Michael Fabian 'Xaymar' Dirks 43dbd81d0e Don't set values that may have side effects during build tests 2023-10-04 08:52:53 +02:00
Michael Fabian 'Xaymar' Dirks b373ba17d3 Fix up copyright headers once again 2023-10-04 07:32:33 +02:00
Michael Fabian 'Xaymar' Dirks 69a6849033 Github wants .adoc, not .ad 2023-10-04 07:32:33 +02:00
Michael Fabian 'Xaymar' Dirks b5c4c27463 Switch to AsciiDoc 2023-10-04 07:28:56 +02:00
Isaac Nudelman 487769fd15 Fix link ordering errors with ld on Linux 2023-10-04 07:28:47 +02:00
Michael Fabian 'Xaymar' Dirks 0efbaa6afb Strip out unnecessary packaging logic 2023-10-04 07:28:36 +02:00
Michael Fabian 'Xaymar' Dirks 9a8be4d8e7 Fix up bundles for MacOS installation 2023-10-04 07:28:36 +02:00
Michael Fabian 'Xaymar' Dirks 34f0306040 Remove Qt 5.x and Ubuntu 20.04 builds 2023-10-04 06:36:18 +02:00
Michael Fabian 'Xaymar' Dirks 2277c60e5e Opt for more modern linkers on CI 2023-10-01 06:32:10 +02:00
Michael Fabian 'Xaymar' Dirks 54b6df0fd0 Potential fix for linker issues 2023-10-01 06:32:10 +02:00
Michael Fabian 'Xaymar' Dirks 0b99ef1be1 nvidia: Fix header includes now that they're in include not source 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 1eecb35c83 autoframing: I have no idea why this is necessary
There does not appear to be a reason for this to cause a compiler error, but it does on MSVC. To be precise, the 'grp2' part causes it if there is not an underscore behind it. A classic "doesn't work without this comment" problem.
2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks ef55651d9c nvidia: Fix missing includes 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 7fb8c6fea2 nvidia: Require explicit set/get commands
This addresses some unexpected behaviors, and might even fix a feature or two.
2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 4982a7900e Fix incorrect target_compile_definitions calls 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks b9b4dba686 nvidia: Actually test for windows 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks df70723884 ffmpeg: Don't break on MacOS
While AMF is not really available on MacOS, we still shouldn't just fail to compile because of it. Might as well do the test and if it doesn't work out, then we still behave the same as before.
2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 915c85e60e core: Frontend and Updater are default features 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks a63eb8b80a denoising: Check if NVIDIA component is available 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 34e754d474 upscaling: Check if NVIDIA component is available 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 4ebc96997e autoframing: Check if NVIDIA component is available 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 3239f5e5b9 virtual-greenscreen: Check if NVIDIA component is available 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks afcd5dfea9 nvidia: We only support Windows at the current time
While a Linux version is (supposedly) available for this functionality, we currently have no integration for it and no way to test it, so it is better to disable it for now.
2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 92ddbd1330 Fix up some dependency logic in component resolving 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 4cf2a399f4 Update Copyright headers
These now include all history, which has fixed some headers that used to be wrong.
2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 4339a5f853 Update copyright.js tool
It will now properly sort authors by date, and follow renames, which should give a much better coverage of copyright information.
2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 0e913edccf Update component logic to support required and optional resolving
This allows resolving a dependency tree up to 10 elements deep, but a different solution may be necessary eventually. A better alternative might be to keep a copy of the unresolved entries and compare it on every pass, instead of limiting resolution to a fixed number of cycles.

This currently doesn't address cyclic dependencies, since I'm not quite sure how those would work with the current model anyway.
2023-09-30 09:25:30 +02:00
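As a rough illustration of the alternative suggested in the commit above (re-scan the unresolved entries until a full pass makes no progress, rather than capping the loop at 10 cycles), here is a self-contained CMake sketch; the list contents and the `try_resolve_component` helper are toy placeholders, not the actual StreamFX resolver.

```cmake
# Sketch: resolve until a full pass makes no progress, instead of a fixed cycle limit.
set(UNRESOLVED_COMPONENTS "autoframing;nvidia")
set(DEPENDENCY_OF_autoframing "nvidia")  # toy dependency data
set(DEPENDENCY_OF_nvidia "")

function(try_resolve_component name out_var)
  # A component resolves once its dependency is no longer waiting in the unresolved list.
  set(_dep "${DEPENDENCY_OF_${name}}")
  if(_dep AND _dep IN_LIST UNRESOLVED_COMPONENTS)
    set(${out_var} FALSE PARENT_SCOPE)
  else()
    set(${out_var} TRUE PARENT_SCOPE)
  endif()
endfunction()

while(UNRESOLVED_COMPONENTS)
  set(_before "${UNRESOLVED_COMPONENTS}")
  foreach(_component IN LISTS UNRESOLVED_COMPONENTS)
    try_resolve_component("${_component}" _resolved)
    if(_resolved)
      list(REMOVE_ITEM UNRESOLVED_COMPONENTS "${_component}")
    endif()
  endforeach()
  if(_before STREQUAL UNRESOLVED_COMPONENTS)
    # A whole pass made no progress: what remains has unmet or cyclic dependencies.
    message(WARNING "Unresolvable components: ${UNRESOLVED_COMPONENTS}")
    break()
  endif()
endwhile()
```

Breaking on a no-progress pass also makes unmet or cyclic dependencies visible instead of silently running out of cycles.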
Michael Fabian 'Xaymar' Dirks 92b93a2479 nvidia: Add optional dependencies to the NVIDIA component 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks a48a32931a Update build guide with latest instructions 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 434936baf6 Split Find/Resolve/Link component discovery stages 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 7c887c06e8 nvidia: Move into its own component
This component enables interactivity with NVIDIA libraries. Currently this is limited to NVIDIA Maxine only.
2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 090f49d3c8 Add NVIDIA Maxine Audio Effects SDK as a third party dependency 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks e6c81ca71e Always build Frontend and Updater
We now require these features all the time, as they are becoming more of a core part of the StreamFX UI. Additionally several components rely on these already being present, so omitting them is not a great idea.
2023-09-30 09:25:30 +02:00
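Since other components assume these features exist, one way to pin them on in CMake is sketched below; the option names are illustrative and may not match StreamFX's real cache variables.

```cmake
# Sketch: features the rest of the build relies on are forced on, regardless of
# what the user or CI passes in. Option names are illustrative.
option(COMPONENT_FRONTEND "Build the frontend integration" ON)
option(COMPONENT_UPDATER  "Build the update checker"       ON)

# Overwrite any cached OFF so dependent components can count on these being present.
set(COMPONENT_FRONTEND ON CACHE BOOL "Build the frontend integration" FORCE)
set(COMPONENT_UPDATER  ON CACHE BOOL "Build the update checker" FORCE)
```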
Michael Fabian 'Xaymar' Dirks 72b0daca05 upscaling: Move into its own component 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 484c790c2a virtual-greenscreen: Move into its own component 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks e3ddbe4336 denoising: Move into its own component 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks d7d8253518 autoframing: Move into its own component 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 7ebe4f5631 sdf-effects: Move into its own component 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 65e91fbbc4 mirror: Move into its own component
Soon to be replaced by Spout/Sink
2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 5d5852c8f7 color-grade: Move into its own component
Another re-usable code section that never got reused. This one is actually more useful, so I might split it into its own component eventually.
2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 4f845ac996 blur: Move into its own component
This still contains some of the old reusable code, which was never used in the first place. I'm unsure what the end goal for it was, as nothing really ended up using it anywhere else.
2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 02f8ca8d83 transform: Move into its own component 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 792bf163b4 dynamic-mask: Move into its own component 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks ecaf39bee1 shader: Move into its own component 2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks d5cf2d2ccf ffmpeg: Move into its own component
While we're at it, let's also fix the invalid destructor, as well as the NVENC HEVC encoder incorrectly using H264.Level to store H265.Level.
2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks d2a543f118 core: Clean up some older C++ code
- Remove float_t and double_t usage, as they aren't related to sized types.
- Remove unused aligned types, their usage has been replaced quite a while ago.
- Update the templates for pow and is_power_of_two.
2023-09-30 09:25:30 +02:00
Michael Fabian 'Xaymar' Dirks 6b02b76e6c Add prefix to commit titles when needed 2023-09-30 09:25:30 +02:00
brighten cfcf975794 fix: add decimal place to remove ambiguity
error: Error compiling shader:
0(142) : error C1101: ambiguous overloaded function reference "log(int)"
    (0) : lowp float log(lowp float)
    (0) : mediump float log(mediump float)
    (0) : float log(float)

error: device_pixelshader_create (GL) failed
error: Pass (0) <> missing pixel shader!
error: [StreamFX] <filter::color_grade> Error loading '/usr/local/share/obs/obs-plugins/StreamFX/effects/color-grade.effect': Unknown error during effect compile.
error: [StreamFX] Unexpected exception in function '_create': Unknown error during effect compile..
error: Failed to create source 'Color Grading'!
2023-09-30 04:46:14 +02:00
Michael Fabian 'Xaymar' Dirks 3d3aef47af Reorder the template for issues and bugs 2023-09-07 06:55:01 +02:00
Michael Fabian 'Xaymar' Dirks 8f7dd1ba4e Remove useless pull request template 2023-09-07 06:55:01 +02:00
Michael Fabian 'Xaymar' Dirks 0af846ea00 Migrate building guide from wiki to code
This should always have been part of the code, but hey - we learn at some point and improve ourselves.
2023-09-07 06:55:01 +02:00
Michael Fabian 'Xaymar' Dirks a3b80daa54 Update Contributor guidelines
Removes the prefixes from commit titles, as they served no purpose other than to complicate things. While we originally copied this style from obs-studio, it has become increasingly clear that the short description usually already describes what the prefix would also describe. And in case it doesn't, you can simply filter by file or directory and get the same result.
2023-09-07 06:55:01 +02:00
Michael Fabian 'Xaymar' Dirks ac307a4912 cmake: Add common include directories and fix Windows
Microsoft has some very annoying #define's which break most if not all of C++ at random spots. It is best to disable them globally so we never have to deal with them. The MSVC CRT warnings are also completely pointless; they merely complain that we use the standard library instead of Microsoft's non-portable functionality.
2023-09-07 03:43:24 +02:00
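The commit does not list the exact definitions, but the usual way to neutralize these Windows macros globally in CMake is sketched below; NOMINMAX, WIN32_LEAN_AND_MEAN and _CRT_SECURE_NO_WARNINGS are assumptions on my part, not a confirmed list.

```cmake
# Sketch: suppress Windows.h macro pollution and MSVC CRT deprecation warnings project-wide.
if(WIN32)
  add_compile_definitions(
    NOMINMAX                # stop <windows.h> from defining min()/max() macros
    WIN32_LEAN_AND_MEAN     # trim rarely used parts of <windows.h>
    _CRT_SECURE_NO_WARNINGS # silence "use *_s instead" warnings for standard C functions
  )
endif()
```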
Michael Fabian 'Xaymar' Dirks 54cd3eef5b cmake: Actually add sources to the Core component 2023-09-03 15:32:46 +02:00
Michael Fabian 'Xaymar' Dirks d8a673a578 cmake: Always provide at least one file to a target
While 'make', 'nmake' and similar tools would accept this without question, it is an impossible task in CMake without an empty file. So we'll just provide the target with an empty file.
2023-09-03 15:32:46 +02:00
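A sketch of the workaround described above; the project, file path and target names are placeholders.

```cmake
# Sketch: CMake refuses to create a library target with no sources at all,
# so hand it a generated empty file that real sources can be attached to later.
cmake_minimum_required(VERSION 3.26)
project("EmptyTargetSketch" LANGUAGES CXX)

set(_empty_source "${CMAKE_CURRENT_BINARY_DIR}/empty.cpp") # placeholder path
if(NOT EXISTS "${_empty_source}")
  file(WRITE "${_empty_source}" "// intentionally empty\n")
endif()

add_library(example_component STATIC "${_empty_source}") # illustrative target
# Components or sub-directories can add their real sources afterwards:
# target_sources(example_component PRIVATE actual-code.cpp)
```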
Michael Fabian 'Xaymar' Dirks efb6e9f0cb cmake: Only enable Qt on components, not on the module
The module only holds the resources file, so Qt is not needed here.
2023-09-03 15:32:46 +02:00
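A sketch of scoping Qt's automatic tooling to the component targets that need it, while the resource-only module target stays Qt-free; project, target and source names are illustrative.

```cmake
# Sketch: only components that actually use Qt get AUTOMOC/AUTOUIC/AUTORCC and the Qt link.
cmake_minimum_required(VERSION 3.26)
project("QtScopeSketch" LANGUAGES CXX)

find_package(Qt6 COMPONENTS Core Widgets REQUIRED)

add_library(example_component MODULE component.cpp)         # illustrative Qt-using component
set_target_properties(example_component PROPERTIES AUTOMOC ON AUTOUIC ON AUTORCC ON)
target_link_libraries(example_component PRIVATE Qt6::Core Qt6::Widgets)

add_library(example_module MODULE module.cpp)               # illustrative resource-only module
# No Qt machinery and no Qt link on the module itself.
```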
Michael Fabian 'Xaymar' Dirks 0ce977b9dd cmake: Uncomment still working code 2023-09-03 15:32:46 +02:00
Michael Fabian 'Xaymar' Dirks 98403126ad cmake: Fix missing public info, and remove PROJECT_NAME usage
Using PROJECT_NAME makes it incompatible with add_subdirectory, and it's really not necessary anyway. There are no plans to rename the project again.

Also needed to expose some information to be public, so that components could actually use it. Seems to be working as intended finally.
2023-09-03 15:32:46 +02:00
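Both points translate roughly into the CMake sketch below: a fixed target name that is independent of whichever `project()` the parent declared, and usage requirements marked PUBLIC so components inherit them; all names and the language level are illustrative.

```cmake
# Sketch: fixed target name instead of ${PROJECT_NAME}, with PUBLIC usage requirements.
cmake_minimum_required(VERSION 3.26)
project("CoreSketch" LANGUAGES CXX)

add_library(StreamFX-Core STATIC core.cpp)                  # fixed name survives add_subdirectory()
target_include_directories(StreamFX-Core
  PUBLIC  "${CMAKE_CURRENT_SOURCE_DIR}/include"             # consumers need these headers
  PRIVATE "${CMAKE_CURRENT_SOURCE_DIR}/source"              # internal only
)
target_compile_features(StreamFX-Core PUBLIC cxx_std_17)    # illustrative language level

add_library(ExampleComponent MODULE component.cpp)          # illustrative component
target_link_libraries(ExampleComponent PRIVATE StreamFX-Core) # inherits the PUBLIC requirements
```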
Michael Fabian 'Xaymar' Dirks 8fb37b8d21 cmake: Fix up missing sub-components due to add_subdirectory
add_subdirectory creates a new "stack" of variables, so PARENT_SCOPE does not point where one might expect: it points just outside the function, which is still inside the subproject rather than outside of it.
2023-09-03 15:32:46 +02:00
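The scoping pitfall and one possible fix can be sketched like this; the function and property names are illustrative.

```cmake
# Sketch: inside a function defined in a sub-project, PARENT_SCOPE escapes the function,
# but stays within the add_subdirectory() scope - the top level never sees the variable.
function(register_subcomponent name)
  # Only visible to the directory that called the function, i.e. still inside the subproject:
  set(DISCOVERED_COMPONENTS "${DISCOVERED_COMPONENTS};${name}" PARENT_SCOPE)

  # A global property (or a CACHE INTERNAL variable) crosses directory scopes reliably:
  set_property(GLOBAL APPEND PROPERTY STREAMFX_DISCOVERED_COMPONENTS "${name}")
endfunction()

register_subcomponent("example")

# Later, at the top level:
get_property(_components GLOBAL PROPERTY STREAMFX_DISCOVERED_COMPONENTS)
message(STATUS "Discovered sub-components: ${_components}")
```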
Michael Fabian 'Xaymar' Dirks d82d3901e4 cmake: Remove remnants of AOM AV1 2023-09-03 15:32:46 +02:00
Michael Fabian 'Xaymar' Dirks f26565cf1e cmake: Remove clang integration, as it breaks on the new system 2023-09-03 15:32:46 +02:00
Michael Fabian 'Xaymar' Dirks 9021274297 cmake: Fix up missing linked objects in component system
We should always link the whole object, even if nothing is needed by the module itself.
2023-09-03 15:32:46 +02:00
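What "link the whole object" can look like in CMake is sketched below; the project, component and module names are placeholders, and the whole-archive generator expression needs CMake 3.24 or newer.

```cmake
# Sketch: make every symbol of a component end up in the loadable module, even if the
# module itself references none of them (e.g. self-registering factories).
cmake_minimum_required(VERSION 3.26)
project("WholeObjectSketch" LANGUAGES CXX)

add_library(example_component OBJECT component.cpp)  # illustrative component
add_library(example_module MODULE module.cpp)        # illustrative plugin module

# Object libraries contribute all of their object files when linked directly:
target_link_libraries(example_module PRIVATE example_component)

# For static libraries, CMake 3.24+ can request whole-archive linking explicitly:
# target_link_libraries(example_module PRIVATE "$<LINK_LIBRARY:WHOLE_ARCHIVE,example_static>")
```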
Michael Fabian 'Xaymar' Dirks 25ba51df12 code: Throw an error on nullptr for util::library::load 2023-09-03 15:32:46 +02:00
Michael Fabian 'Xaymar' Dirks 50c85608c3 cmake: Initial work towards component-ification
The old fake component system is starting to be very annoying to work with, as it doesn't properly split things apart. The new system should aid with this significantly, and make errors easier to spot.
2023-09-03 15:32:46 +02:00
Michael Fabian 'Xaymar' Dirks e82823d49c cmake: Explicitly disable treating warnings as errors
As libOBS and OBS Studio unfortunately enforce treating warnings as errors, it is necessary to do the opposite. This may remove the need for the existing patch altogether, but I'll leave it be for now and just add this single-line fix.
2023-07-31 15:28:39 +02:00
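One possible shape of that single-line opt-out is sketched below, assuming the enforcing build uses CMake's COMPILE_WARNING_AS_ERROR mechanism (CMake 3.24+); if the flags are injected directly into the compile options instead, a per-target /WX- or -Wno-error override would be needed.

```cmake
# Sketch: stop CMake from promoting warnings to errors for targets configured from here on.
set(CMAKE_COMPILE_WARNING_AS_ERROR OFF)

# Fallback idea for directly injected flags (illustrative target name):
# target_compile_options(example_target PRIVATE
#   $<$<CXX_COMPILER_ID:MSVC>:/WX->
#   $<$<NOT:$<CXX_COMPILER_ID:MSVC>>:-Wno-error>)
```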
Michael Fabian 'Xaymar' Dirks 8b97c2b23d templates: Fix the remaining uncommitted changes 2023-05-20 20:52:40 +02:00
Michael Fabian 'Xaymar' Dirks 9df2f01963 templates: Pascal uses <> instead of != 2023-05-20 20:34:25 +02:00
Michael Fabian 'Xaymar' Dirks ffb7a6c5d7 code: Add GoPro CineForm to FFmpeg Encoders 2023-05-20 19:54:46 +02:00
Michael Fabian 'Xaymar' Dirks f66fabc5d4 templates: Move to 'usercf' instead of 'userpf'
Local (per-user) add-ons to software should reside in "C:\Users\Username\AppData\Local\Programs\Common\", similar to System (all-users) add-ons which reside in "C:\Program Files\Common Files\".

Fixes #1049
2023-05-20 19:54:15 +02:00
Michael Fabian 'Xaymar' Dirks 38d87f6fcf code: Don't crash if there is no encoder instance 2023-05-20 19:54:05 +02:00
Michael Fabian 'Xaymar' Dirks 3e13126f89 code: Remove audio encoder registration from FFmpeg Encoders 2023-05-20 19:25:46 +02:00
Michael Fabian 'Xaymar' Dirks 9d0233a740 code: Create mutexes to prevent Windows (un)installer from continuing
Might fix the problem where people uninstall StreamFX while they still have OBS Studio open with StreamFX loaded. InnoSetup appears to ignore this in /VERYSILENT, so this is an additional guard against that.
2023-05-20 19:24:06 +02:00
Michael Fabian 'Xaymar' Dirks 07182d2f89 templates: Exit-early if the user aborts the removal of an older version 2023-05-20 19:24:06 +02:00
Michael Fabian 'Xaymar' Dirks 5bdcefd618 code: Fix support for multiple FFmpeg versions
This should make it compile just fine with older FFmpeg versions again, such as on Ubuntu 20.04.
2023-05-16 15:19:11 +02:00
Michael Fabian 'Xaymar' Dirks 0402c8ef60 code: Adjust copyright headers
Doesn't appear to follow renames, so I guess this is the new copyright!
2023-05-16 15:19:11 +02:00
Michael Fabian 'Xaymar' Dirks 1c76169821 code: Migrate encoder::ffmpeg::nvenc to new loader 2023-05-16 15:19:11 +02:00
Michael Fabian 'Xaymar' Dirks 51282b7b85 code: Migrate encoder::ffmpeg::amf to new loader 2023-05-16 15:19:11 +02:00
Michael Fabian 'Xaymar' Dirks d8235bf504 code: Migrate encoder::ffmpeg::dnxhd to new loader 2023-05-16 15:19:11 +02:00
Michael Fabian 'Xaymar' Dirks 0fb670eba4 code: Migrate encoder::ffmpeg::prores_aw to new loader 2023-05-16 15:19:11 +02:00
Michael Fabian 'Xaymar' Dirks 376a3d6233 code: Overriding color format doesn't work without a pointer or reference 2023-05-16 15:19:11 +02:00
Michael Fabian 'Xaymar' Dirks fc8ebc7bf3 code: Rename encoder::ffmpeg::prores_aw 2023-05-16 15:19:11 +02:00
Michael Fabian 'Xaymar' Dirks 78310f9c63 code: Migrate encoder::ffmpeg::debug to new loader 2023-05-16 15:19:11 +02:00
Michael Fabian 'Xaymar' Dirks 85c8cdf8bd code: Wrong return type for get_avcodeccontext
The context should be modifiable; we don't really care about it anyway. If it's broken, then it's broken and the encoder errors out.
2023-05-16 15:19:11 +02:00
Michael Fabian 'Xaymar' Dirks c4461e70b9 code: Migrate encoder::ffmpeg to modern handler loader
A different version of the dynamic loader allows us to simply register handlers at load time, instead of requiring custom code. Could also make it so that it loads them when needed, but since they're mostly static code, this won't matter much.
2023-05-16 15:19:11 +02:00
Michael Fabian 'Xaymar' Dirks a1968b970b code: Migrate encoder::ffmpeg handlers into proper directory
Shouldn't have an effect on functionality, only affects location.
2023-05-16 15:19:11 +02:00
Michael Fabian 'Xaymar' Dirks 21f8a66c7f cmake: Mark encoder::ffmpeg::nvenc as Stable 2023-05-16 06:04:59 +02:00
240 changed files with 3979 additions and 4437 deletions


@ -7,12 +7,6 @@ title: "REPLACE ME"
description: "This form is for bug and crash reports only, primarily used by developers. Abuse of this form will lead to a permanent interaction ban."
labels: ["bug"]
body:
- type: textarea
attributes:
label: "OBS Studio Logs"
description: "Paste the content or attach the log files from OBS Studio here. In the event of a crash, paste or attach both the crash log and the normal log file."
validations:
required: true
- type: textarea
attributes:
label: "Current and Expected Behavior"
@ -25,6 +19,12 @@ body:
description: "What steps are required to consistently reproduce the bug/crash/freeze?"
validations:
required: true
- type: textarea
attributes:
label: "Log files & Crash Dumps"
description: "Paste the content or attach the log files from OBS Studio here. In the event of a crash, paste or attach both the crash log and the normal log file."
validations:
required: false
- type: textarea
attributes:
label: "Any additional Information we need to know?"


@ -1,15 +0,0 @@
### Explain the Pull Request
<!-- Describe the PR in as much detail as possible. If possible include example images, videos and documents, and explain why it is necessary. If this is related to a discussion or issue, please also link them. -->
#### Completion Checklist
<!-- Check all items that apply. Don't lie here, we'll know the moment we verify this. -->
- [ ] This has been tested on the following platforms: <!-- REQUIRED (at least one) -->
- [ ] MacOS 10.15
- [ ] MacOS 11
- [ ] MacOS 12
- [ ] Ubuntu 20.04
- [ ] Ubuntu 22.04
- [ ] Windows 10
- [ ] Windows 11
- [ ] The copyright headers and license files have been updated. <!-- REQUIRED -->
- [ ] I will maintain this for the forseeable future, and have added myself to `CODEOWNERS`. <!-- REQUIRED for content or feature additions -->


@ -312,24 +312,13 @@ jobs:
strategy:
fail-fast: false
matrix:
runner: [ "ubuntu-22.04", "ubuntu-20.04" ]
compiler: [ "GCC-12", "GCC-11", "Clang-16" ]
qt: [ 5, 6 ]
runner: [ "ubuntu-22.04" ]
compiler: [ "GCC-12", "Clang-16" ]
qt: [ 6 ]
CMAKE_GENERATOR: [ "Ninja Multi-Config" ]
exclude:
- runner: "ubuntu-22.04"
qt: 5
- runner: "ubuntu-22.04"
compiler: "GCC-11"
- runner: "ubuntu-20.04"
qt: 6
- runner: "ubuntu-20.04"
compiler: "GCC-12"
include:
- runner: "ubuntu-22.04"
name: "Ubuntu 22.04"
- runner: "ubuntu-20.04"
name: "Ubuntu 20.04"
runs-on: "${{ matrix.runner }}"
name: "${{ matrix.name }} (${{ matrix.compiler }}, Qt${{ matrix.qt }})"
env:
@ -359,6 +348,7 @@ jobs:
echo "CMAKE_C_COMPILER=gcc-${compiler[1]}" >> "$GITHUB_ENV"
echo "CMAKE_CXX_COMPILER=g++-${compiler[1]}" >> "$GITHUB_ENV"
echo "CMAKE_LINKER=gold" >> "$GITHUB_ENV"
elif [[ "${compiler[0]}" == "Clang" ]]; then
curl -jLo /tmp/llvm.sh "https://apt.llvm.org/llvm.sh"
chmod +x /tmp/llvm.sh
@ -373,6 +363,7 @@ jobs:
echo "CMAKE_C_COMPILER=clang-${compiler[1]}" >> "$GITHUB_ENV"
echo "CMAKE_CXX_COMPILER=clang++-${compiler[1]}" >> "$GITHUB_ENV"
echo "CMAKE_LINKER=ld.lld-${compiler[1]}" >> "$GITHUB_ENV"
else
echo "Unknown Compiler"
exit 1
@ -381,13 +372,8 @@ jobs:
id: qt
shell: bash
run: |
if [[ ${{ matrix.qt }} -eq 5 ]]; then
sudo apt-get -y install -V \
qtbase5-dev qtbase5-private-dev libqt5svg5-dev
elif [[ ${{ matrix.qt }} -eq 6 ]]; then
sudo apt-get -y install -V \
qt6-base-dev qt6-base-private-dev libqt6svg6-dev libgles2-mesa-dev libegl1-mesa-dev libgl1-mesa-dev
fi
sudo apt-get -y install -V \
qt6-base-dev qt6-base-private-dev libqt6svg6-dev libgles2-mesa-dev libegl1-mesa-dev libgl1-mesa-dev
- name: "Dependency: Prebuilt OBS Studio Dependencies"
id: obsdeps
shell: bash
@ -458,9 +444,6 @@ jobs:
-DCMAKE_C_COMPILER="${{ env.CMAKE_C_COMPILER }}" \
-DCMAKE_CXX_COMPILER="${{ env.CMAKE_CXX_COMPILER }}" \
-DCMAKE_INTERPROCEDURAL_OPTIMIZATION=ON \
-DCMAKE_INSTALL_PREFIX="${{ github.workspace }}/build/ci/install" \
-DPACKAGE_NAME="streamfx-${{ env.PACKAGE_NAME }}" \
-DPACKAGE_PREFIX="${{ github.workspace }}/build/package" \
-Dlibobs_DIR="${{ github.workspace }}/build/obs/install"
- name: "Build: Debug"
continue-on-error: true

.gitmodules (vendored): 3 changes

@ -23,3 +23,6 @@
[submodule "third-party/obs-studio"]
path = third-party/obs-studio
url = https://github.com/obsproject/obs-studio.git
[submodule "third-party/nvidia-maxine-afx-sdk"]
path = third-party/nvidia-maxine-afx-sdk
url = https://github.com/NVIDIA/MAXINE-AFX-SDK.git


@ -3,5 +3,6 @@ Michael Fabian 'Xaymar' Dirks <info@xaymar.com> <github@xaymar.com>
Vainock <39059951+Vainock@users.noreply.github.com> <contact.vainock@gmail.com>
Charles Fettinger <charles@oncorporation.com> <charles@onacloud.org>
Charles Fettinger <charles@oncorporation.com> <charles@Oncorporation.com>
Radegast Stravinsky <radegast.ffxiv@gmail.com> <radegast.ffxiv@gmail.com>
Radegast Stravinsky <radegast.ffxiv@gmail.com> <58457062+Radegast-FFXIV@users.noreply.github.com>
Carsten Braun <info@braun-cloud.de> <info@braun-software-solutions.de>

BUILDING.md (new file): 153 changes

@ -0,0 +1,153 @@
# Building
This document intends to guide you through the process of building StreamFX. It requires understanding of the tools used, and may require you to learn tools yourself before you can advance further in the guide. It is intended to be used by developers and contributors.
## Building Bundled
<details open><summary>The main method to build StreamFX is to first set up an OBS Studio copy and then integrate the StreamFX repository into it.</summary>
1. [Uninstall](Uninstallation) any currently installed versions of StreamFX to prevent conflicts.
2. Follow the [OBS Studio build guide](https://obsproject.com/wiki/install-instructions) for automated building on your platform of choice.
- **MacOS:** You will need to use the XCode generator to build StreamFX as the Ninja generator does not support the flags StreamFX requires.
3. Integrate StreamFX into the OBS Studio build flow:
1. Navigate to `<obs studio source>/UI/frontend-plugins`
2. Open a `git` enabled shell (git-bash on windows).
3. Run `git submodule add 'https://github.com/Xaymar/obs-StreamFX.git' streamfx`.
4. Run `git submodule update --init --recursive`.
5. Append the line `add_subdirectory(streamfx)` to the `CMakeLists.txt` file in the same directory.
4. Run the same steps from the build guide in step 2 again.
5. Done. StreamFX is now part of the build.
</details>
## Building CI-Style
<details><summary>This method is designed for continuous integration and releases, and requires significant knowledge of CMake, OBS, and various other tools. Additionally it is not guaranteed to work on every machine, as it is only designed for use in continuous integration and nowhere else. It may even stop being maintained entirely with no warning whatsoever. You are entirely on your own when you choose this method.</summary>
#### Install Prerequisites / Dependencies
- [Git](https://git-scm.com/)
- **Debian / Ubuntu:** `sudo apt install git`
- [CMake](https://cmake.org/) 3.20 (or newer)
- **Debian / Ubuntu:** `sudo apt install cmake`
- A compatible Compiler:
- **Windows**
[Visual Studio](https://visualstudio.microsoft.com/vs/) 2019 (or newer)
- **MacOS**
Xcode 11.x (or newer) for x86_64
Xcode 12.x (or newer) for arm64
- **Debian / Ubuntu**
- Essential Build Tools:
`sudo apt install build-essential pkg-config checkinstall make ninja-build`
- One of:
- GCC 11 (or newer)
`sudo apt install gcc-11 g++-11`
- [LLVM](https://releases.llvm.org/) Clang 14 (or newer)
`sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)"`
- One of:
- ld or gold
`sudo apt install binutils`
- [LLVM](https://releases.llvm.org/) lld
`sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)"`
- [mold](https://github.com/rui314/mold)
`sudo apt install mold`
- [Qt](https://www.qt.io/) 6:
- **Windows**
A Node.JS based tool is provided to read and parse the `/third-party/obs-studio/buildspec.json` file. See `/.github/workflows/main.yml` for usage and output parsing.
- **MacOS**
A Node.JS based tool is provided to read and parse the `/third-party/obs-studio/buildspec.json` file. See `/.github/workflows/main.yml` for usage and output parsing.
- **Debian / Ubuntu:**
`sudo apt install qt6-base-dev qt6-base-private-dev libqt6svg6-dev`
- [CURL](https://curl.se/):
- **Windows**
A Node.JS based tool is provided to read and parse the `/third-party/obs-studio/buildspec.json` file. See `/.github/workflows/main.yml` for usage and output parsing.
- **MacOS**
A Node.JS based tool is provided to read and parse the `/third-party/obs-studio/buildspec.json` file. See `/.github/workflows/main.yml` for usage and output parsing.
- **Debian / Ubuntu:**
`sudo apt install libcurl4-openssl-dev`
- [FFmpeg](https://ffmpeg.org/) (Optional, for FFmpeg component only):
- **Windows**
A Node.JS based tool is provided to read and parse the `/third-party/obs-studio/buildspec.json` file. See `/.github/workflows/main.yml` for usage and output parsing.
- **MacOS**
A Node.JS based tool is provided to read and parse the `/third-party/obs-studio/buildspec.json` file. See `/.github/workflows/main.yml` for usage and output parsing.
- **Debian / Ubuntu**
`sudo apt install libavcodec-dev libavdevice-dev libavfilter-dev libavformat-dev libavutil-dev libswresample-dev libswscale-dev`
- [LLVM](https://releases.llvm.org/) (Optional, for clang-format and clang-tidy integration only):
- **Debian / Ubuntu**
`sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)" all`
- [InnoSetup](https://jrsoftware.org/isinfo.php) (Optional, for **Windows** installer only)
### Cloning the Project
Using your preferred tool of choice for git, clone the repository including all submodules into a directory. If you use git directly, then you can clone the entire project with `git clone --recursive https://github.com/Xaymar/obs-StreamFX.git streamfx`.
### Configuring with CMake
There are two ways to handle this step, with the GUI variant of CMake and with the command line version of CMake. This guide will focus on the GUI variant, but all the steps below can be done with the command line version as well.
1. Launch CMake-GUI and wait for it to open.
2. Click the button named `Browse Build` and point it at an empty folder. For example, create a folder in the project called `build` and select that folder.
3. Click the button named `Browse Source` and point it at the project itself.
4. Click the button named `Configure`, select your preferred Generator (the default is usually fine), and wait for it to complete. This will most likely result in an error which is expected.
5. Adjust the variables in the variable list as necessary. Take a look at [the documentation](#CMake-Options) for what each option does.
6. Click the button named `Generate`, which will also run `Configure`. Both together should succeed if you did everything correctly.
7. If available, you can now click the button named `Open Project` to immediately jump into your IDE of choice.
</details>
## CMake Options
<details><summary>The project is intended to be versatile and configurable, so we offer almost everything to be configured on a silver platter directly in CMake (if possible). If StreamFX detects that it is being built together with other projects, it will automatically prefix all options with `StreamFX_` to prevent collisions.</summary>
### Generic
- `GIT` (not prefixed)
Path to the `git` binary on your system, for use with features that require git during configuration and generation.
- `VERSION`
Set or override the version of the project with a custom one. Allowed formats are: SemVer 2.0.0, CMake.
### Code
- `ENABLE_CLANG`
Enable integration of `clang-format` and `clang-tidy`
- `CLANG_PATH` (not prefixed, only with `ENABLE_CLANG`)
Path to the `clang` installation containing `clang-format` and `clang-tidy`. Only used as a hint.
- `CLANG_FORMAT_PATH` and `CLANG_TIDY_PATH` (not prefixed)
Path to `clang-format` and `clang-tidy` that will be used.
### Dependencies
- `LibObs_DIR`
Path to the obs-studio libraries.
- `Qt5_DIR`, `Qt6_DIR` or `Qt_DIR` (autodetect)
Path to Qt5 (OBS Studio 27.x and lower) or Qt6 (OBS Studio 28.x and higher).
- `FFmpeg_DIR`
Path to compatible FFmpeg libraries and headers.
- `CURL_DIR`
Path to compatible CURL libraries and headers.
- `AOM_DIR`
Path to compatible AOM libraries and headers.
### Compiling
- `ENABLE_FASTMATH`
Enable fast math optimizations if the compiler supports them. This trades precision for performance, and is usually good enough anyway.
- `ENABLE_LTO`
Enable link time optimization for faster binaries in exchange for longer build times.
- `ENABLE_PROFILING`
Enable CPU and GPU profiling code, this option reduces performance drastically.
- `TARGET_*`
Specify which architecture target the generated binaries will use.
### Components
- `COMPONENT_<NAME>`
Enable the component by the given name.
### Installing & Packaging
These options are only available in CI-Style mode.
- `CMAKE_INSTALL_PREFIX`
The path in which installed content should be placed when building the `install` target.
- `STRUCTURE_PACKAGEMANAGER`
If enabled will install files in a layout compatible with package managers.
- `STRUCTURE_UNIFIED`
Enable to install files in a layout compatible with an OBS Studio plugin manager.
- `PACKAGE_NAME`
The name of the packaged archive, excluding the prefix, suffix and extension.
- `PACKAGE_PREFIX`
The path in which the packages should be placed.
- `PACKAGE_SUFFIX`
The suffix to attach to the name, before the file extension. If left blank will attach the current version string to the package.
- `STRUCTURE_UNIFIED`
Enable to replace the PACKAGE_ZIP target with a target that generates a single `.obs` file instead.
</details>

File diff suppressed because it is too large.


@ -1,17 +1,18 @@
# Contributing
This document goes over how you (and/or your organization) are expected to contribute. These guidelines are softly enforced and sometimes not required.
This document intends to teach you the proper way to contribute to the project as a set of guidelines. While they aren't always enforced, your chances of your code being accepted are significantly higher when you follow these. For smaller changes, we might opt to squash your changes to apply the guidelines below to your contribution.
## Localization
We use Crowdin to handle translations into many languages, and you can join the [StreamFX project on Crowdin](https://crowdin.com/project/obs-stream-effects) if you are interested in improving the translations to your native tongue. As Crowdin handles all other languages, Pull Requests therefore should only include changes to `en-US.ini`.
<details open><summary><h2 style="display: inline-block;">Repository & Commits</h2></summary>
## Commit Guidelines
Commits should focus on a single change such as formatting, fixing a bug, a warning across the code, and similar things. This means that you should not include a fix to color format handling in a commit that implements a new encoder, or include a fix to a bug with a fix to a warning.
As this is a rather large project, we have certain rules to follow when contributing via git.
### Linear History
This project prefers the linear history of `git rebase` and forbids merge commits. This allows all branches to be a single line back to the root, unless viewed as a whole where it becomes a tree. If you are working on a branch for a feature, bug or other thing, you should know how to rebase back onto the main branch before making a pull request.
We follow the paradigm of linear history which forbids branches from being merged, thus changes made on branches are `git rebase`d back onto the root. This simplifies the code history significantly, but makes reverting changes more difficult.
### Commit Message & Title
We require a commit message format like this:
`git merge`
`git rebase`
### Commits
A commit should be containing a single change, even if it spans multiple units, and has the following format:
```
prefix: short description
@ -19,35 +20,60 @@ prefix: short description
optional long description
```
The `short description` should be no longer than 80 characters, excluding the `prefix: ` part. The `optional long description` should be present if the change is not immediately obvious - however it does not replace proper documentation.
The short description should be no longer than 120 characters and focus on the important things. The long description is optional, but should be included for larger changes.
#### The correct `prefix`
Depending on where the file is that you ended up modifying, or if you modified multiple files at once, the prefix changes. Take a look at the list to understand which directories cause which prefix:
#### The appropriate `prefix`
- `/CMakeLists.txt`, `/cmake` -> `cmake`
- `/.github/workflows` -> `ci`
- `/data/locale`, `/crowdin.yml` -> `locale`
- `/data/examples` -> `examples`
- `/data` -> `data` (if not part of another prefix)
- `/media` -> `media`
- `/source`, `/include` -> `code`
- `/templates` -> `templates` (or merge with `cmake`)
- `/third-party` -> `third-party`
- `/patches` -> `patches`
- `/tools` -> `tools`
- `/ui` -> `ui` (if not part of a `code` change)
- Most other files -> `project`
<table>
<tr>
<th>Path(s)</th>
<th>Prefix</th>
<th>Example</th>
</tr>
<tr>
<td>
data/locale
</td>
<td>locale</td>
<td>
<code>data/locale/en-US.ini</code> -> <code>locale</code>
</td>
</tr>
<tr>
<td>components/name</td>
<td>name</td>
<td>
<code>components/shader</code> -> <code>shader</code>
</td>
</tr>
<tr>
<td>
source<br>
templates<br>
data<br>
ui
</td>
<td>core</td>
<td>
<code>ui/main.ui</code> -> <code>core</code>
</td>
</tr>
<tr>
<td>Anything else</td>
<td><b>Omit the prefix</b></td>
<td></td>
</tr>
</table>
If multiple locations match, they should be alphabetically sorted and separated by `, `. A change to both `ui` and `code` will as such result in a prefix of `code, ui`. If a `code` change only affects a single file, or multiple files with a common parent file, the prefix should be the path of the file, like shown in the following examples:
If multiple match, apply the prefix that changes the most files. If all are equal, alphabetically sort the prefixes and list comma separated.
- `/source/encoders/encoder-ffmpeg` -> `encoder/ffmpeg`
- `/source/filters/filter-shader` -> `filter/shader`
- `/source/encoders/handlers/handler`, `/source/encoders/encoder-ffmpeg` -> `encoder/ffmpeg`
</details>
## Coding Guidelines
<details open><summary><h2 style="display: inline-block;">Coding</h2></summary>
### Documentation
Documentation should be present in areas where it would save time to new developers, and in areas where an API is defined. This means that you should not provide documentation for things like `1 + 1`, but for things like the following:
The short form of this part is **Code != Documentation**. Documentation is what you intend your Code to do, while Code is what it actually does. If your Code mismatches the Documentation, it is time to fix the Code, unless the change is a new addition in terms of behavior or functionality. Note that by this we don't mean to document things like `1 + 1` but instead things like the following:
```c++
int32_t idepth = static_cast<int32_t>(depth);
@ -58,14 +84,18 @@ int32_t container_size = static_cast<int32_t>(pow(2l, (idepth + (idepth / 2))));
```c++
class magic_class {
void do_magic_thing(float magic_number);
void do_magic_thing(float magic_number) {
// Lots and lots of SIMD code that does a magic thing...
}
}
```
Both of these examples would be much easier to understand if they had proper documentation, and save hours if not even days of delving into code. Documentation is about saving time to new developers, and can't be replaced by code. Code is not Documentation!
Documenting what a block of Code does not only helps you, it also helps other contributors understand what this Code is supposed to do. While you may be able to read your own Code (at least for now), there is no guarantee that either you or someone else will be able to read it in the future. Not only that, but it makes spotting mistakes and fixing them easier, since we have Documentation to tell us what it is supposed to do!
### Naming & Casing
All long-term objects should have a descriptive name, which can be used by other developers to know what it is for. Temporary objects should also have some information, but do not necessarily follow the same rules.
The project isn't too strict about variable naming as well as casing, but we do prefer a universal style across all code. While this may appear as removing your individuality from the code, it ultimately serves the purpose of making it easier to jump from one block of code to the other, without having to guess at what this code now does.
Additionally we prefer it when things are named by what they either do or what they contain, instead of having the entire alphabet spelled out in different arrangements. While it is fine to have chaos in your own Code for your private or hobby projects, it is not fine to submit such code to other projects.
#### Macros
- Casing: ELEPHANT_CASE
@ -249,6 +279,16 @@ Special rules for `class`
#### Members
All class members must be `private` and only accessible through get-/setters. The setter of a member should also validate if the setting is within an allowed range, and throw exceptions if an error occurs. If there is no better option, it is allowed to delay validation until a common function is called.
## Building
Please read [the guide on the wiki](https://github.com/Xaymar/obs-StreamFX/wiki/Building) for building the project.
</details>
<details open><summary><h2 style="display: inline-block;">Localization</h2></summary>
We use Crowdin to handle translations into many languages, and you can join the [StreamFX project on Crowdin](https://crowdin.com/project/obs-stream-effects) if you are interested in improving the translations to your native tongue. As Crowdin handles all other languages, Pull Requests therefore should only include changes to `en-US.ini`.
</details>
## Further Resources
- A guide on how to build the project is in BUILDING.MD.
- A no bullshit guide to `git`: https://rogerdudler.github.io/git-guide/
- Remember, `git` has help pages for all commands - run `git <command> --help`.
- ... or use visual clients, like TortoiseGit, Github Desktop, SourceTree, and similar. It's what I do.

README.adoc (new file): 57 changes

@ -0,0 +1,57 @@
== image:https://raw.githubusercontent.com/Xaymar/obs-StreamFX/master/media/logo.png[alt="StreamFX"]
Upgrade your setup with several modern sources, filters, transitions and encoders using StreamFX! With several performant and flexible features, you will discover new ways to build your scenes, better ways to encode your content, and take your stream to the next level. Create cool new scenes with 3D effects, add glow or shadow, or blur out content - endless choices, and all of it at your fingertips.
++++
<p style="text-align: center; font-weight: bold; font-size: 1.5em;">
<a href="https://github.com/Xaymar/obs-StreamFX/wiki">More Information</a><br/>
<a href="https://github.com/Xaymar/obs-StreamFX/actions"><img src="https://github.com/Xaymar/obs-StreamFX/actions/workflows/main.yml/badge.svg" alt="CI Status" /></a>
<a href="https://crowdin.com/project/obs-stream-effects"><img src="https://badges.crowdin.net/obs-stream-effects/localized.svg" alt="Crowdin Status" /></a>
</p>
++++
=== Support the development of StreamFX!
++++
<a href="https://patreon.com/join/xaymar" target="_blank">
<img height="70px" alt="Patreon" style="height: 70px; float:right;" align="right" src="https://user-images.githubusercontent.com/437395/106462708-bd602980-6496-11eb-8f35-038577cf8fd7.jpg"/>
</a>
++++
Maintaining a project like StreamFX requires time and money, of which both are in short supply. If you use any feature of StreamFX, please consider supporting StreamFX via link:https://patreon.com/xaymar[Patreon]. Even as little as 1€ per month matters a lot, plus you get a number of benefits!
=== License
Licensed under link:https://github.com/Xaymar/obs-StreamFX/blob/root/LICENSE[GPLv3 (or later), see LICENSE]. Additional works included are:
[options="header"]
|=================
|Work |License |Author(s)
|link:https://gen.glad.sh/[GLAD]
|link:https://github.com/Dav1dde/glad/blob/glad2/LICENSE[MIT License]
|link:https://github.com/Dav1dde/glad/graphs/contributors?type=a[Dav1dde, madebr, BtbN, and more]
|link:https://github.com/nlohmann/json[JSON for Modern C++]
|link:https://github.com/nlohmann/json/blob/develop/LICENSE.MIT[MIT License]
|link:https://github.com/nlohmann/json/graphs/contributors?type=a[nlohmann, ChrisKtiching, nickaein, and more]
|link:https://github.com/NVIDIA/MAXINE-AFX-SDK[NVIDIA Maxine Audio Effects SDK]
|link:https://github.com/NVIDIA/MAXINE-AFX-SDK/blob/master/LICENSE[MIT License]
|link:https://nvidia.com/[NVIDIA Corporation]
|link:https://github.com/NVIDIA/MAXINE-AR-SDK[NVIDIA Maxine Augmented Reality SDK]
|link:https://github.com/NVIDIA/MAXINE-Ar-SDK/blob/master/LICENSE[MIT License]
|link:https://nvidia.com/[NVIDIA Corporation]
|link:https://github.com/NVIDIA/MAXINE-VFX-SDK[NVIDIA Maxine Video Effects SDK]
|link:https://github.com/NVIDIA/MAXINE-VFX-SDK/blob/master/LICENSE[MIT License]
|link:https://nvidia.com/[NVIDIA Corporation]
|link:https://github.com/obsproject/obs-studio[Open Broadcaster Software Studio]
|link:https://github.com/obsproject/obs-studio/blob/master/COPYING[GPL-2.0 (or later)]
|link:https://github.com/obsproject/obs-studio/graphs/contributors?type=a[jp9000, computerquip, and more]
|link:https://www.qt.io/[Qt 6.x]
|link:https://www.qt.io/download-open-source[(L)GPL-3.0 (or later)]
|link:https://www.qt.io/[The Qt Company], and open source contributors
|=================


@ -1,12 +0,0 @@
![StreamFX Logo](https://raw.githubusercontent.com/Xaymar/obs-StreamFX/master/media/logo.png)
# StreamFX
Bring your setup to the modern day with StreamFX! With several super fast filters, new ways to build your scenes, and new encoders you can now take your stream even further. Create cool new scenes with 3D effects, make something glow or have a shadow, or blur out content - the choice is yours!
[![CI](https://github.com/Xaymar/obs-StreamFX/actions/workflows/main.yml/badge.svg)](https://github.com/Xaymar/obs-StreamFX/actions) [![Crowdin](https://badges.crowdin.net/obs-stream-effects/localized.svg)](https://crowdin.com/project/obs-stream-effects)
# Support the development of StreamFX!
[<img align="right" alt="Patreon" src="https://user-images.githubusercontent.com/437395/106462708-bd602980-6496-11eb-8f35-038577cf8fd7.jpg" height="70px"/>](https://patreon.com/join/xaymar) Maintaining a project like StreamFX requires time and money, of which both are in short supply. If you use any feature of StreamFX, please consider supporting StreamFX via [Patreon](https://patreon.com/xaymar). Even as little as 1€ per month matters a lot, plus you get a number of benefits!
## Further Links
* [Wiki](https://github.com/Xaymar/obs-StreamFX/wiki)
* [Installation Guide](https://github.com/xaymar/obs-streamfx/wiki/Installation)


@ -0,0 +1,24 @@
# AUTOGENERATED COPYRIGHT HEADER START
# Copyright (C) 2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
# AUTOGENERATED COPYRIGHT HEADER END
cmake_minimum_required(VERSION 3.26)
project("AutoFraming")
list(APPEND CMAKE_MESSAGE_INDENT "[${PROJECT_NAME}] ")
streamfx_add_component("Auto-Framing"
RESOLVER streamfx_auto_framing_resolver
)
streamfx_add_component_dependency("NVIDIA" OPTIONAL)
function(streamfx_auto_framing_resolver)
# Providers
#- NVIDIA
streamfx_enabled_component("NVIDIA" T_CHECK)
if(T_CHECK)
target_compile_definitions(${COMPONENT_TARGET} PRIVATE
PRIVATE ENABLE_NVIDIA
)
endif()
endfunction()


@ -146,7 +146,7 @@ autoframing_instance::~autoframing_instance()
// TODO: Make this asynchronous.
switch (_provider) {
#ifdef ENABLE_FILTER_DENOISING_NVIDIA
#ifdef ENABLE_NVIDIA
case tracking_provider::NVIDIA_FACEDETECTION:
nvar_facedetection_unload();
break;
@ -358,7 +358,7 @@ void autoframing_instance::update(obs_data_t* data)
std::unique_lock<std::mutex> ul(_provider_lock);
switch (_provider) {
#ifdef ENABLE_FILTER_UPSCALING_NVIDIA
#ifdef ENABLE_NVIDIA
case tracking_provider::NVIDIA_FACEDETECTION:
nvar_facedetection_update(data);
break;
@ -375,7 +375,7 @@ void autoframing_instance::update(obs_data_t* data)
void streamfx::filter::autoframing::autoframing_instance::properties(obs_properties_t* properties)
{
switch (_provider_ui) {
#ifdef ENABLE_FILTER_AUTOFRAMING_NVIDIA
#ifdef ENABLE_NVIDIA
case tracking_provider::NVIDIA_FACEDETECTION:
nvar_facedetection_properties(properties);
break;
@ -401,7 +401,7 @@ uint32_t autoframing_instance::get_height()
return std::max<uint32_t>(_out_size.second, 1);
}
void autoframing_instance::video_tick(float_t seconds)
void autoframing_instance::video_tick(float seconds)
{
auto target = obs_filter_get_target(_self);
auto width = obs_source_get_base_width(target);
@ -490,7 +490,7 @@ void autoframing_instance::video_render(gs_effect_t* effect)
std::unique_lock<std::mutex> ul(_provider_lock);
switch (_provider) {
#ifdef ENABLE_FILTER_DENOISING_NVIDIA
#ifdef ENABLE_NVIDIA
case tracking_provider::NVIDIA_FACEDETECTION:
nvar_facedetection_process();
break;
@ -864,7 +864,7 @@ void streamfx::filter::autoframing::autoframing_instance::task_switch_provider(u
try {
// Unload the previous provider.
switch (spd->provider) {
#ifdef ENABLE_FILTER_AUTOFRAMING_NVIDIA
#ifdef ENABLE_NVIDIA
case tracking_provider::NVIDIA_FACEDETECTION:
nvar_facedetection_unload();
break;
@ -875,7 +875,7 @@ void streamfx::filter::autoframing::autoframing_instance::task_switch_provider(u
// Load the new provider.
switch (_provider) {
#ifdef ENABLE_FILTER_AUTOFRAMING_NVIDIA
#ifdef ENABLE_NVIDIA
case tracking_provider::NVIDIA_FACEDETECTION:
nvar_facedetection_load();
break;
@ -894,7 +894,7 @@ void streamfx::filter::autoframing::autoframing_instance::task_switch_provider(u
}
}
#ifdef ENABLE_FILTER_AUTOFRAMING_NVIDIA
#ifdef ENABLE_NVIDIA
void streamfx::filter::autoframing::autoframing_instance::nvar_facedetection_load()
{
_nvidia_fx = std::make_shared<::streamfx::nvidia::ar::facedetection>();
@ -1008,7 +1008,7 @@ autoframing_factory::autoframing_factory()
bool any_available = false;
// 1. Try and load any configured providers.
#ifdef ENABLE_FILTER_AUTOFRAMING_NVIDIA
#ifdef ENABLE_NVIDIA
try {
// Load CVImage and Video Effects SDK.
_nvcuda = ::streamfx::nvidia::cuda::obs::get();
@ -1097,11 +1097,9 @@ obs_properties_t* autoframing_factory::get_properties2(autoframing_instance* dat
{
obs_properties_t* pr = obs_properties_create();
#ifdef ENABLE_FRONTEND
{
obs_properties_add_button2(pr, S_MANUAL_OPEN, D_TRANSLATE(S_MANUAL_OPEN), autoframing_factory::on_manual_open, nullptr);
}
#endif
{
auto grp = obs_properties_create();
@ -1144,26 +1142,26 @@ obs_properties_t* autoframing_factory::get_properties2(autoframing_instance* dat
}
{
auto grp2 = obs_properties_create();
obs_properties_add_group(grp, ST_KEY_FRAMING_PADDING, D_TRANSLATE(ST_I18N_FRAMING_PADDING), OBS_GROUP_NORMAL, grp2);
auto grp2_ = obs_properties_create();
obs_properties_add_group(grp, ST_KEY_FRAMING_PADDING, D_TRANSLATE(ST_I18N_FRAMING_PADDING), OBS_GROUP_NORMAL, grp2_);
{
auto p = obs_properties_add_text(grp2, ST_KEY_FRAMING_PADDING ".X", "X", OBS_TEXT_DEFAULT);
auto p = obs_properties_add_text(grp2_, ST_KEY_FRAMING_PADDING ".X", "X", OBS_TEXT_DEFAULT);
}
{
auto p = obs_properties_add_text(grp2, ST_KEY_FRAMING_PADDING ".Y", "Y", OBS_TEXT_DEFAULT);
auto p = obs_properties_add_text(grp2_, ST_KEY_FRAMING_PADDING ".Y", "Y", OBS_TEXT_DEFAULT);
}
}
{
auto grp2 = obs_properties_create();
obs_properties_add_group(grp, ST_KEY_FRAMING_OFFSET, D_TRANSLATE(ST_I18N_FRAMING_OFFSET), OBS_GROUP_NORMAL, grp2);
auto grp2_ = obs_properties_create();
obs_properties_add_group(grp, ST_KEY_FRAMING_OFFSET, D_TRANSLATE(ST_I18N_FRAMING_OFFSET), OBS_GROUP_NORMAL, grp2_);
{
auto p = obs_properties_add_text(grp2, ST_KEY_FRAMING_OFFSET ".X", "X", OBS_TEXT_DEFAULT);
auto p = obs_properties_add_text(grp2_, ST_KEY_FRAMING_OFFSET ".X", "X", OBS_TEXT_DEFAULT);
}
{
auto p = obs_properties_add_text(grp2, ST_KEY_FRAMING_OFFSET ".Y", "Y", OBS_TEXT_DEFAULT);
auto p = obs_properties_add_text(grp2_, ST_KEY_FRAMING_OFFSET ".Y", "Y", OBS_TEXT_DEFAULT);
}
}
@ -1213,7 +1211,7 @@ obs_properties_t* autoframing_factory::get_properties2(autoframing_instance* dat
auto p = obs_properties_add_list(grp, ST_KEY_ADVANCED_PROVIDER, D_TRANSLATE(ST_I18N_ADVANCED_PROVIDER), OBS_COMBO_TYPE_LIST, OBS_COMBO_FORMAT_INT);
obs_property_set_modified_callback(p, modified_provider);
obs_property_list_add_int(p, D_TRANSLATE(S_STATE_AUTOMATIC), static_cast<int64_t>(tracking_provider::AUTOMATIC));
#ifdef ENABLE_FILTER_AUTOFRAMING_NVIDIA
#ifdef ENABLE_NVIDIA
obs_property_list_add_int(p, D_TRANSLATE(ST_I18N_ADVANCED_PROVIDER_NVIDIA_FACEDETECTION), static_cast<int64_t>(tracking_provider::NVIDIA_FACEDETECTION));
#endif
}
@ -1224,18 +1222,16 @@ obs_properties_t* autoframing_factory::get_properties2(autoframing_instance* dat
return pr;
}
#ifdef ENABLE_FRONTEND
bool streamfx::filter::autoframing::autoframing_factory::on_manual_open(obs_properties_t* props, obs_property_t* property, void* data)
{
streamfx::open_url(HELP_URL);
return false;
}
#endif
bool streamfx::filter::autoframing::autoframing_factory::is_provider_available(tracking_provider provider)
{
switch (provider) {
#ifdef ENABLE_FILTER_AUTOFRAMING_NVIDIA
#ifdef ENABLE_NVIDIA
case tracking_provider::NVIDIA_FACEDETECTION:
return _nvidia_available;
#endif


@ -19,7 +19,7 @@
#include <mutex>
#include "warning-enable.hpp"
#ifdef ENABLE_FILTER_AUTOFRAMING_NVIDIA
#ifdef ENABLE_NVIDIA
#include "nvidia/ar/nvidia-ar-facedetection.hpp"
#endif
@ -81,7 +81,7 @@ namespace streamfx::filter::autoframing {
std::mutex _provider_lock;
std::shared_ptr<util::threadpool::task> _provider_task;
#ifdef ENABLE_FILTER_AUTOFRAMING_NVIDIA
#ifdef ENABLE_NVIDIA
std::shared_ptr<::streamfx::nvidia::ar::facedetection> _nvidia_fx;
#endif
@ -126,7 +126,7 @@ namespace streamfx::filter::autoframing {
uint32_t get_width() override;
uint32_t get_height() override;
virtual void video_tick(float_t seconds) override;
virtual void video_tick(float seconds) override;
virtual void video_render(gs_effect_t* effect) override;
private:
@ -135,7 +135,7 @@ namespace streamfx::filter::autoframing {
void switch_provider(tracking_provider provider);
void task_switch_provider(util::threadpool::task_data_t data);
#ifdef ENABLE_FILTER_AUTOFRAMING_NVIDIA
#ifdef ENABLE_NVIDIA
void nvar_facedetection_load();
void nvar_facedetection_unload();
void nvar_facedetection_process();
@ -145,7 +145,7 @@ namespace streamfx::filter::autoframing {
};
class autoframing_factory : public obs::source_factory<streamfx::filter::autoframing::autoframing_factory, streamfx::filter::autoframing::autoframing_instance> {
#ifdef ENABLE_FILTER_AUTOFRAMING_NVIDIA
#ifdef ENABLE_NVIDIA
bool _nvidia_available;
std::shared_ptr<::streamfx::nvidia::cuda::obs> _nvcuda;
std::shared_ptr<::streamfx::nvidia::cv::cv> _nvcvi;
@ -161,9 +161,7 @@ namespace streamfx::filter::autoframing {
void get_defaults2(obs_data_t* data) override;
obs_properties_t* get_properties2(autoframing_instance* data) override;
#ifdef ENABLE_FRONTEND
static bool on_manual_open(obs_properties_t* props, obs_property_t* property, void* data);
#endif
bool is_provider_available(tracking_provider);
tracking_provider find_ideal_provider();


@ -0,0 +1,9 @@
# AUTOGENERATED COPYRIGHT HEADER START
# Copyright (C) 2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
# AUTOGENERATED COPYRIGHT HEADER END
cmake_minimum_required(VERSION 3.26)
project("Blur")
list(APPEND CMAKE_MESSAGE_INDENT "[${PROJECT_NAME}] ")
streamfx_add_component("Blur")


@ -1,5 +1,6 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2019-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// Copyright (C) 2017-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// Copyright (C) 2019 Cat Stevens <catb0t@protonmail.ch>
// AUTOGENERATED COPYRIGHT HEADER END
#include "filter-blur.hpp"
@ -290,12 +291,12 @@ void blur_instance::update(obs_data_t* settings)
_mask.type = static_cast<mask_type>(obs_data_get_int(settings, ST_KEY_MASK_TYPE));
switch (_mask.type) {
case mask_type::Region:
_mask.region.left = float_t(obs_data_get_double(settings, ST_KEY_MASK_REGION_LEFT) / 100.0);
_mask.region.top = float_t(obs_data_get_double(settings, ST_KEY_MASK_REGION_TOP) / 100.0);
_mask.region.right = 1.0f - float_t(obs_data_get_double(settings, ST_KEY_MASK_REGION_RIGHT) / 100.0);
_mask.region.bottom = 1.0f - float_t(obs_data_get_double(settings, ST_KEY_MASK_REGION_BOTTOM) / 100.0);
_mask.region.feather = float_t(obs_data_get_double(settings, ST_KEY_MASK_REGION_FEATHER) / 100.0);
_mask.region.feather_shift = float_t(obs_data_get_double(settings, ST_KEY_MASK_REGION_FEATHER_SHIFT) / 100.0);
_mask.region.left = float(obs_data_get_double(settings, ST_KEY_MASK_REGION_LEFT) / 100.0);
_mask.region.top = float(obs_data_get_double(settings, ST_KEY_MASK_REGION_TOP) / 100.0);
_mask.region.right = 1.0f - float(obs_data_get_double(settings, ST_KEY_MASK_REGION_RIGHT) / 100.0);
_mask.region.bottom = 1.0f - float(obs_data_get_double(settings, ST_KEY_MASK_REGION_BOTTOM) / 100.0);
_mask.region.feather = float(obs_data_get_double(settings, ST_KEY_MASK_REGION_FEATHER) / 100.0);
_mask.region.feather_shift = float(obs_data_get_double(settings, ST_KEY_MASK_REGION_FEATHER_SHIFT) / 100.0);
_mask.region.invert = obs_data_get_bool(settings, ST_KEY_MASK_REGION_INVERT);
break;
case mask_type::Image:
@ -310,8 +311,8 @@ void blur_instance::update(obs_data_t* settings)
_mask.color.r = static_cast<float>((color >> 0) & 0xFF) / 255.0f;
_mask.color.g = static_cast<float>((color >> 8) & 0xFF) / 255.0f;
_mask.color.b = static_cast<float>((color >> 16) & 0xFF) / 255.0f;
_mask.color.a = static_cast<float_t>(obs_data_get_double(settings, ST_KEY_MASK_ALPHA));
_mask.multiplier = float_t(obs_data_get_double(settings, ST_KEY_MASK_MULTIPLIER));
_mask.color.a = static_cast<float>(obs_data_get_double(settings, ST_KEY_MASK_ALPHA));
_mask.multiplier = float(obs_data_get_double(settings, ST_KEY_MASK_MULTIPLIER));
}
}
}
@ -464,7 +465,7 @@ void blur_instance::video_render(gs_effect_t* effect)
std::string technique = "";
switch (this->_mask.type) {
case mask_type::Region:
if (this->_mask.region.feather > std::numeric_limits<float_t>::epsilon()) {
if (this->_mask.region.feather > std::numeric_limits<float>::epsilon()) {
if (this->_mask.region.invert) {
technique = "RegionFeatherInverted";
} else {
@ -744,11 +745,9 @@ obs_properties_t* blur_factory::get_properties2(blur_instance* data)
obs_properties_t* pr = obs_properties_create();
obs_property_t* p = NULL;
#ifdef ENABLE_FRONTEND
{
obs_properties_add_button2(pr, S_MANUAL_OPEN, D_TRANSLATE(S_MANUAL_OPEN), streamfx::filter::blur::blur_factory::on_manual_open, nullptr);
}
#endif
// Blur Type and Sub-Type
{
@ -839,7 +838,6 @@ std::string blur_factory::translate_string(const char* format, ...)
return std::string(buffer.data(), buffer.data() + len);
}
#ifdef ENABLE_FRONTEND
bool blur_factory::on_manual_open(obs_properties_t* props, obs_property_t* property, void* data)
{
try {
@ -853,7 +851,6 @@ bool blur_factory::on_manual_open(obs_properties_t* props, obs_property_t* prope
return false;
}
}
#endif
std::shared_ptr<blur_factory> blur_factory::instance()
{


@ -1,5 +1,5 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2019-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// Copyright (C) 2017-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
#pragma once
@ -55,12 +55,12 @@ namespace streamfx::filter::blur {
bool enabled;
mask_type type;
struct {
float_t left;
float_t top;
float_t right;
float_t bottom;
float_t feather;
float_t feather_shift;
float left;
float top;
float right;
float bottom;
float feather;
float feather_shift;
bool invert;
} region;
struct {
@ -76,12 +76,12 @@ namespace streamfx::filter::blur {
std::shared_ptr<streamfx::obs::gs::texture> texture;
} source;
struct {
float_t r;
float_t g;
float_t b;
float_t a;
float r;
float g;
float b;
float a;
} color;
float_t multiplier;
float multiplier;
} _mask;
public:
@ -93,7 +93,7 @@ namespace streamfx::filter::blur {
virtual void migrate(obs_data_t* settings, uint64_t version) override;
virtual void update(obs_data_t* settings) override;
virtual void video_tick(float_t time) override;
virtual void video_tick(float time) override;
virtual void video_render(gs_effect_t* effect) override;
private:
@ -115,9 +115,7 @@ namespace streamfx::filter::blur {
std::string translate_string(const char* format, ...);
#ifdef ENABLE_FRONTEND
static bool on_manual_open(obs_properties_t* props, obs_property_t* property, void* data);
#endif
public: // Singleton
static std::shared_ptr<blur_factory> instance();
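A note on the type change running through this header and the sources above: float_t comes from <math.h>/<cmath> and is only required to be at least as wide as float (it follows FLT_EVAL_METHOD), so it can legally be double or long double on some targets. Pinning the members to plain float keeps them the 32-bit values the effect parameters expect. A small compile-time sketch of the distinction, as an illustration rather than code from the plugin:

#include <cmath>        // std::float_t
#include <type_traits>

// Effect parameters such as pSize/pKernel are packed as 32-bit floats.
static_assert(sizeof(float) == 4, "expected 32-bit float on supported platforms");
// The following is exactly what cannot be guaranteed portably, which is why the
// members were switched to plain float:
// static_assert(std::is_same_v<std::float_t, float>, "fails when FLT_EVAL_METHOD != 0");
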

View File

@ -236,8 +236,8 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::box_linear::r
auto gdmp = streamfx::obs::gs::debug_marker(streamfx::obs::gs::debug_color_azure_radiance, "Box Linear Blur");
#endif
float_t width = float_t(_input_texture->get_width());
float_t height = float_t(_input_texture->get_height());
float width = float(_input_texture->get_width());
float height = float(_input_texture->get_height());
gs_set_cull_mode(GS_NEITHER);
gs_enable_color(true, true, true, true);
@ -257,10 +257,10 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::box_linear::r
if (effect) {
// Pass 1
effect.get_parameter("pImage").set_texture(_input_texture);
effect.get_parameter("pImageTexel").set_float2(float_t(1.f / width), 0.f);
effect.get_parameter("pStepScale").set_float2(float_t(_step_scale.first), float_t(_step_scale.second));
effect.get_parameter("pSize").set_float(float_t(_size));
effect.get_parameter("pSizeInverseMul").set_float(float_t(1.0f / (float_t(_size) * 2.0f + 1.0f)));
effect.get_parameter("pImageTexel").set_float2(float(1.f / width), 0.f);
effect.get_parameter("pStepScale").set_float2(float(_step_scale.first), float(_step_scale.second));
effect.get_parameter("pSize").set_float(float(_size));
effect.get_parameter("pSizeInverseMul").set_float(float(1.0f / (float(_size) * 2.0f + 1.0f)));
{
#if defined(ENABLE_PROFILING) && !defined(D_PLATFORM_MAC) && _DEBUG
@ -276,7 +276,7 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::box_linear::r
// Pass 2
effect.get_parameter("pImage").set_texture(_rendertarget2->get_texture());
effect.get_parameter("pImageTexel").set_float2(0., float_t(1.f / height));
effect.get_parameter("pImageTexel").set_float2(0., float(1.f / height));
{
#if defined(ENABLE_PROFILING) && !defined(D_PLATFORM_MAC) && _DEBUG
@ -326,8 +326,8 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::box_linear_di
auto gdmp = streamfx::obs::gs::debug_marker(streamfx::obs::gs::debug_color_azure_radiance, "Box Linear Directional Blur");
#endif
float_t width = float_t(_input_texture->get_width());
float_t height = float_t(_input_texture->get_height());
float width = float(_input_texture->get_width());
float height = float(_input_texture->get_height());
gs_blend_state_push();
gs_reset_blend_state();
@ -346,10 +346,10 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::box_linear_di
streamfx::obs::gs::effect effect = _data->get_effect();
if (effect) {
effect.get_parameter("pImage").set_texture(_input_texture);
effect.get_parameter("pImageTexel").set_float2(float_t(1. / width * cos(_angle)), float_t(1.f / height * sin(_angle)));
effect.get_parameter("pStepScale").set_float2(float_t(_step_scale.first), float_t(_step_scale.second));
effect.get_parameter("pSize").set_float(float_t(_size));
effect.get_parameter("pSizeInverseMul").set_float(float_t(1.0f / (float_t(_size) * 2.0f + 1.0f)));
effect.get_parameter("pImageTexel").set_float2(float(1. / width * cos(_angle)), float(1.f / height * sin(_angle)));
effect.get_parameter("pStepScale").set_float2(float(_step_scale.first), float(_step_scale.second));
effect.get_parameter("pSize").set_float(float(_size));
effect.get_parameter("pSizeInverseMul").set_float(float(1.0f / (float(_size) * 2.0f + 1.0f)));
{
auto op = _rendertarget->render(uint32_t(width), uint32_t(height));
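For orientation on the passes above: the blur is separable, so pass 1 steps one texel horizontally (pImageTexel = (1/width, 0)) into an intermediate target and pass 2 steps one texel vertically (0, 1/height), and every tap of a box kernel with radius _size carries the same weight 1 / (2 * _size + 1), which is the value uploaded as pSizeInverseMul. A CPU-side sketch of the same idea, illustrative only:

#include <cstddef>
#include <vector>

static std::vector<float> box_blur(const std::vector<float>& src, std::size_t w, std::size_t h, int radius)
{
    const float weight = 1.0f / (2.0f * float(radius) + 1.0f);
    std::vector<float> tmp(src.size()), dst(src.size());
    auto clamp = [](long v, long lo, long hi) { return v < lo ? lo : (v > hi ? hi : v); };

    for (std::size_t y = 0; y < h; y++) {     // Pass 1: horizontal, src -> tmp
        for (std::size_t x = 0; x < w; x++) {
            float sum = 0.f;
            for (int o = -radius; o <= radius; o++)
                sum += src[y * w + std::size_t(clamp(long(x) + o, 0, long(w) - 1))];
            tmp[y * w + x] = sum * weight;
        }
    }
    for (std::size_t y = 0; y < h; y++) {     // Pass 2: vertical, tmp -> dst
        for (std::size_t x = 0; x < w; x++) {
            float sum = 0.f;
            for (int o = -radius; o <= radius; o++)
                sum += tmp[std::size_t(clamp(long(y) + o, 0, long(h) - 1)) * w + x];
            dst[y * w + x] = sum * weight;
        }
    }
    return dst;
}
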

View File

@ -246,8 +246,8 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::box::render()
auto gdmp = streamfx::obs::gs::debug_marker(streamfx::obs::gs::debug_color_azure_radiance, "Box Blur");
#endif
float_t width = float_t(_input_texture->get_width());
float_t height = float_t(_input_texture->get_height());
float width = float(_input_texture->get_width());
float height = float(_input_texture->get_height());
gs_set_cull_mode(GS_NEITHER);
gs_enable_color(true, true, true, true);
@ -267,10 +267,10 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::box::render()
if (effect) {
// Pass 1
effect.get_parameter("pImage").set_texture(_input_texture);
effect.get_parameter("pImageTexel").set_float2(float_t(1.f / width), 0.f);
effect.get_parameter("pStepScale").set_float2(float_t(_step_scale.first), float_t(_step_scale.second));
effect.get_parameter("pSize").set_float(float_t(_size));
effect.get_parameter("pSizeInverseMul").set_float(float_t(1.0f / (float_t(_size) * 2.0f + 1.0f)));
effect.get_parameter("pImageTexel").set_float2(float(1.f / width), 0.f);
effect.get_parameter("pStepScale").set_float2(float(_step_scale.first), float(_step_scale.second));
effect.get_parameter("pSize").set_float(float(_size));
effect.get_parameter("pSizeInverseMul").set_float(float(1.0f / (float(_size) * 2.0f + 1.0f)));
{
#if defined(ENABLE_PROFILING) && !defined(D_PLATFORM_MAC) && _DEBUG
@ -286,7 +286,7 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::box::render()
// Pass 2
effect.get_parameter("pImage").set_texture(_rendertarget2->get_texture());
effect.get_parameter("pImageTexel").set_float2(0.f, float_t(1.f / height));
effect.get_parameter("pImageTexel").set_float2(0.f, float(1.f / height));
{
#if defined(ENABLE_PROFILING) && !defined(D_PLATFORM_MAC) && _DEBUG
@ -336,8 +336,8 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::box_direction
auto gdmp = streamfx::obs::gs::debug_marker(streamfx::obs::gs::debug_color_azure_radiance, "Box Directional Blur");
#endif
float_t width = float_t(_input_texture->get_width());
float_t height = float_t(_input_texture->get_height());
float width = float(_input_texture->get_width());
float height = float(_input_texture->get_height());
gs_blend_state_push();
gs_reset_blend_state();
@ -356,10 +356,10 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::box_direction
streamfx::obs::gs::effect effect = _data->get_effect();
if (effect) {
effect.get_parameter("pImage").set_texture(_input_texture);
effect.get_parameter("pImageTexel").set_float2(float_t(1. / width * cos(_angle)), float_t(1.f / height * sin(_angle)));
effect.get_parameter("pStepScale").set_float2(float_t(_step_scale.first), float_t(_step_scale.second));
effect.get_parameter("pSize").set_float(float_t(_size));
effect.get_parameter("pSizeInverseMul").set_float(float_t(1.0f / (float_t(_size) * 2.0f + 1.0f)));
effect.get_parameter("pImageTexel").set_float2(float(1. / width * cos(_angle)), float(1.f / height * sin(_angle)));
effect.get_parameter("pStepScale").set_float2(float(_step_scale.first), float(_step_scale.second));
effect.get_parameter("pSize").set_float(float(_size));
effect.get_parameter("pSizeInverseMul").set_float(float(1.0f / (float(_size) * 2.0f + 1.0f)));
{
auto op = _rendertarget->render(uint32_t(width), uint32_t(height));
@ -410,8 +410,8 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::box_rotationa
auto gdmp = streamfx::obs::gs::debug_marker(streamfx::obs::gs::debug_color_azure_radiance, "Box Rotational Blur");
#endif
float_t width = float_t(_input_texture->get_width());
float_t height = float_t(_input_texture->get_height());
float width = float(_input_texture->get_width());
float height = float(_input_texture->get_height());
gs_blend_state_push();
gs_reset_blend_state();
@ -430,12 +430,12 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::box_rotationa
streamfx::obs::gs::effect effect = _data->get_effect();
if (effect) {
effect.get_parameter("pImage").set_texture(_input_texture);
effect.get_parameter("pImageTexel").set_float2(float_t(1.f / width), float_t(1.f / height));
effect.get_parameter("pStepScale").set_float2(float_t(_step_scale.first), float_t(_step_scale.second));
effect.get_parameter("pSize").set_float(float_t(_size));
effect.get_parameter("pSizeInverseMul").set_float(float_t(1.0f / (float_t(_size) * 2.0f + 1.0f)));
effect.get_parameter("pAngle").set_float(float_t(_angle / _size));
effect.get_parameter("pCenter").set_float2(float_t(_center.first), float_t(_center.second));
effect.get_parameter("pImageTexel").set_float2(float(1.f / width), float(1.f / height));
effect.get_parameter("pStepScale").set_float2(float(_step_scale.first), float(_step_scale.second));
effect.get_parameter("pSize").set_float(float(_size));
effect.get_parameter("pSizeInverseMul").set_float(float(1.0f / (float(_size) * 2.0f + 1.0f)));
effect.get_parameter("pAngle").set_float(float(_angle / _size));
effect.get_parameter("pCenter").set_float2(float(_center.first), float(_center.second));
{
auto op = _rendertarget->render(uint32_t(width), uint32_t(height));
@ -476,8 +476,8 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::box_zoom::ren
auto gdmp = streamfx::obs::gs::debug_marker(streamfx::obs::gs::debug_color_azure_radiance, "Box Zoom Blur");
#endif
float_t width = float_t(_input_texture->get_width());
float_t height = float_t(_input_texture->get_height());
float width = float(_input_texture->get_width());
float height = float(_input_texture->get_height());
gs_blend_state_push();
gs_reset_blend_state();
@ -496,11 +496,11 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::box_zoom::ren
streamfx::obs::gs::effect effect = _data->get_effect();
if (effect) {
effect.get_parameter("pImage").set_texture(_input_texture);
effect.get_parameter("pImageTexel").set_float2(float_t(1.f / width), float_t(1.f / height));
effect.get_parameter("pStepScale").set_float2(float_t(_step_scale.first), float_t(_step_scale.second));
effect.get_parameter("pSize").set_float(float_t(_size));
effect.get_parameter("pSizeInverseMul").set_float(float_t(1.0f / (float_t(_size) * 2.0f + 1.0f)));
effect.get_parameter("pCenter").set_float2(float_t(_center.first), float_t(_center.second));
effect.get_parameter("pImageTexel").set_float2(float(1.f / width), float(1.f / height));
effect.get_parameter("pStepScale").set_float2(float(_step_scale.first), float(_step_scale.second));
effect.get_parameter("pSize").set_float(float(_size));
effect.get_parameter("pSizeInverseMul").set_float(float(1.0f / (float(_size) * 2.0f + 1.0f)));
effect.get_parameter("pCenter").set_float2(float(_center.first), float(_center.second));
{
auto op = _rendertarget->render(uint32_t(width), uint32_t(height));

View File

@ -40,7 +40,7 @@ streamfx::gfx::blur::gaussian_linear_data::gaussian_linear_data() : _gfx_util(::
// Precalculate Kernels
for (std::size_t kernel_size = 1; kernel_size <= ST_MAX_BLUR_SIZE; kernel_size++) {
std::vector<double_t> kernel_math(ST_MAX_KERNEL_SIZE);
std::vector<float_t> kernel_data(ST_MAX_KERNEL_SIZE);
std::vector<float> kernel_data(ST_MAX_KERNEL_SIZE);
double_t actual_width = 1.;
// Find actual kernel width.
@ -61,7 +61,7 @@ streamfx::gfx::blur::gaussian_linear_data::gaussian_linear_data() : _gfx_util(::
// Normalize to fill the entire 0..1 range over the width.
double_t inverse_sum = 1.0 / sum;
for (std::size_t p = 0; p <= kernel_size; p++) {
kernel_data.at(p) = float_t(kernel_math[p] * inverse_sum);
kernel_data.at(p) = float(kernel_math[p] * inverse_sum);
}
_kernels.push_back(std::move(kernel_data));
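The loop above caches one normalized half-kernel per blur size: Gaussian weights are written into kernel_math, summed, and rescaled by 1/sum so they total one across the sampled width. The plugin's exact width search differs in its details, but the weight and normalization step look roughly like this sketch (illustrative names, not the plugin's own helper):

#include <cmath>
#include <cstddef>
#include <vector>

static std::vector<float> make_half_kernel(std::size_t radius, double sigma)
{
    std::vector<double> weights(radius + 1);
    double sum = 0.0;
    for (std::size_t i = 0; i <= radius; i++) {
        weights[i] = std::exp(-0.5 * (double(i) * double(i)) / (sigma * sigma));
        sum += (i == 0) ? weights[i] : 2.0 * weights[i]; // off-center taps appear on both sides
    }
    std::vector<float> out(radius + 1);
    for (std::size_t i = 0; i <= radius; i++)
        out[i] = float(weights[i] / sum); // normalize so the full kernel integrates to one
    return out;
}
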
@ -78,7 +78,7 @@ streamfx::obs::gs::effect streamfx::gfx::blur::gaussian_linear_data::get_effect(
return _effect;
}
std::vector<float_t> const& streamfx::gfx::blur::gaussian_linear_data::get_kernel(std::size_t width)
std::vector<float> const& streamfx::gfx::blur::gaussian_linear_data::get_kernel(std::size_t width)
{
if (width < 1)
width = 1;
@ -292,8 +292,8 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::gaussian_line
return _input_texture;
}
float_t width = float_t(_input_texture->get_width());
float_t height = float_t(_input_texture->get_height());
float width = float(_input_texture->get_width());
float height = float(_input_texture->get_height());
// Setup
gs_set_cull_mode(GS_NEITHER);
@ -310,13 +310,13 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::gaussian_line
gs_stencil_op(GS_STENCIL_BOTH, GS_ZERO, GS_ZERO, GS_ZERO);
effect.get_parameter("pImage").set_texture(_input_texture);
effect.get_parameter("pStepScale").set_float2(float_t(_step_scale.first), float_t(_step_scale.second));
effect.get_parameter("pSize").set_float(float_t(_size));
effect.get_parameter("pStepScale").set_float2(float(_step_scale.first), float(_step_scale.second));
effect.get_parameter("pSize").set_float(float(_size));
effect.get_parameter("pKernel").set_value(kernel.data(), ST_MAX_KERNEL_SIZE);
// First Pass
if (_step_scale.first > std::numeric_limits<double_t>::epsilon()) {
effect.get_parameter("pImageTexel").set_float2(float_t(1.f / width), 0.f);
effect.get_parameter("pImageTexel").set_float2(float(1.f / width), 0.f);
{
#if defined(ENABLE_PROFILING) && !defined(D_PLATFORM_MAC) && _DEBUG
@ -336,7 +336,7 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::gaussian_line
// Second Pass
if (_step_scale.second > std::numeric_limits<double_t>::epsilon()) {
effect.get_parameter("pImageTexel").set_float2(0.f, float_t(1.f / height));
effect.get_parameter("pImageTexel").set_float2(0.f, float(1.f / height));
{
#if defined(ENABLE_PROFILING) && !defined(D_PLATFORM_MAC) && _DEBUG
@ -397,8 +397,8 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::gaussian_line
return _input_texture;
}
float_t width = float_t(_input_texture->get_width());
float_t height = float_t(_input_texture->get_height());
float width = float(_input_texture->get_width());
float height = float(_input_texture->get_height());
// Setup
gs_set_cull_mode(GS_NEITHER);
@ -415,9 +415,9 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::gaussian_line
gs_stencil_op(GS_STENCIL_BOTH, GS_ZERO, GS_ZERO, GS_ZERO);
effect.get_parameter("pImage").set_texture(_input_texture);
effect.get_parameter("pImageTexel").set_float2(float_t(1.f / width * cos(_angle)), float_t(1.f / height * sin(_angle)));
effect.get_parameter("pStepScale").set_float2(float_t(_step_scale.first), float_t(_step_scale.second));
effect.get_parameter("pSize").set_float(float_t(_size));
effect.get_parameter("pImageTexel").set_float2(float(1.f / width * cos(_angle)), float(1.f / height * sin(_angle)));
effect.get_parameter("pStepScale").set_float2(float(_step_scale.first), float(_step_scale.second));
effect.get_parameter("pSize").set_float(float(_size));
effect.get_parameter("pKernel").set_value(kernel.data(), ST_MAX_KERNEL_SIZE);
// First Pass

View File

@ -20,7 +20,7 @@ namespace streamfx::gfx {
class gaussian_linear_data {
streamfx::obs::gs::effect _effect;
std::shared_ptr<streamfx::gfx::util> _gfx_util;
std::vector<std::vector<float_t>> _kernels;
std::vector<std::vector<float>> _kernels;
public:
gaussian_linear_data();
@ -30,7 +30,7 @@ namespace streamfx::gfx {
streamfx::obs::gs::effect get_effect();
std::vector<float_t> const& get_kernel(std::size_t width);
std::vector<float> const& get_kernel(std::size_t width);
};
class gaussian_linear_factory : public ::streamfx::gfx::blur::ifactory {

View File

@ -108,7 +108,7 @@ std::shared_ptr<streamfx::gfx::util> streamfx::gfx::blur::gaussian_data::get_gfx
return _gfx_util;
}
std::vector<float_t> const& streamfx::gfx::blur::gaussian_data::get_kernel(std::size_t width)
std::vector<float> const& streamfx::gfx::blur::gaussian_data::get_kernel(std::size_t width)
{
width = std::clamp<size_t>(width, 1, ST_MAX_BLUR_SIZE);
return _kernels.at(width);
@ -321,8 +321,8 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::gaussian::ren
}
auto kernel = _data->get_kernel(size_t(_size));
float_t width = float_t(_input_texture->get_width());
float_t height = float_t(_input_texture->get_height());
float width = float(_input_texture->get_width());
float height = float(_input_texture->get_height());
// Setup
gs_set_cull_mode(GS_NEITHER);
@ -338,14 +338,14 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::gaussian::ren
gs_stencil_function(GS_STENCIL_BOTH, GS_ALWAYS);
gs_stencil_op(GS_STENCIL_BOTH, GS_ZERO, GS_ZERO, GS_ZERO);
effect.get_parameter("pStepScale").set_float2(float_t(_step_scale.first), float_t(_step_scale.second));
effect.get_parameter("pSize").set_float(float_t(_size * ST_OVERSAMPLE_MULTIPLIER));
effect.get_parameter("pStepScale").set_float2(float(_step_scale.first), float(_step_scale.second));
effect.get_parameter("pSize").set_float(float(_size * ST_OVERSAMPLE_MULTIPLIER));
effect.get_parameter("pKernel").set_value(kernel.data(), ST_KERNEL_SIZE);
// First Pass
if (_step_scale.first > std::numeric_limits<double_t>::epsilon()) {
effect.get_parameter("pImage").set_texture(_input_texture);
effect.get_parameter("pImageTexel").set_float2(float_t(1.f / width), 0.f);
effect.get_parameter("pImageTexel").set_float2(float(1.f / width), 0.f);
{
#if defined(ENABLE_PROFILING) && !defined(D_PLATFORM_MAC) && _DEBUG
@ -365,7 +365,7 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::gaussian::ren
// Second Pass
if (_step_scale.second > std::numeric_limits<double_t>::epsilon()) {
effect.get_parameter("pImage").set_texture(_rendertarget->get_texture());
effect.get_parameter("pImageTexel").set_float2(0.f, float_t(1.f / height));
effect.get_parameter("pImageTexel").set_float2(0.f, float(1.f / height));
{
#if defined(ENABLE_PROFILING) && !defined(D_PLATFORM_MAC) && _DEBUG
@ -426,8 +426,8 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::gaussian_dire
}
auto kernel = _data->get_kernel(size_t(_size));
float_t width = float_t(_input_texture->get_width());
float_t height = float_t(_input_texture->get_height());
float width = float(_input_texture->get_width());
float height = float(_input_texture->get_height());
// Setup
gs_set_cull_mode(GS_NEITHER);
@ -444,9 +444,9 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::gaussian_dire
gs_stencil_op(GS_STENCIL_BOTH, GS_ZERO, GS_ZERO, GS_ZERO);
effect.get_parameter("pImage").set_texture(_input_texture);
effect.get_parameter("pImageTexel").set_float2(float_t(1.f / width * cos(m_angle)), float_t(1.f / height * sin(m_angle)));
effect.get_parameter("pStepScale").set_float2(float_t(_step_scale.first), float_t(_step_scale.second));
effect.get_parameter("pSize").set_float(float_t(_size * ST_OVERSAMPLE_MULTIPLIER));
effect.get_parameter("pImageTexel").set_float2(float(1.f / width * cos(m_angle)), float(1.f / height * sin(m_angle)));
effect.get_parameter("pStepScale").set_float2(float(_step_scale.first), float(_step_scale.second));
effect.get_parameter("pSize").set_float(float(_size * ST_OVERSAMPLE_MULTIPLIER));
effect.get_parameter("pKernel").set_value(kernel.data(), ST_KERNEL_SIZE);
{
@ -482,8 +482,8 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::gaussian_rota
}
auto kernel = _data->get_kernel(size_t(_size));
float_t width = float_t(_input_texture->get_width());
float_t height = float_t(_input_texture->get_height());
float width = float(_input_texture->get_width());
float height = float(_input_texture->get_height());
// Setup
gs_set_cull_mode(GS_NEITHER);
@ -500,11 +500,11 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::gaussian_rota
gs_stencil_op(GS_STENCIL_BOTH, GS_ZERO, GS_ZERO, GS_ZERO);
effect.get_parameter("pImage").set_texture(_input_texture);
effect.get_parameter("pImageTexel").set_float2(float_t(1.f / width), float_t(1.f / height));
effect.get_parameter("pStepScale").set_float2(float_t(_step_scale.first), float_t(_step_scale.second));
effect.get_parameter("pSize").set_float(float_t(_size * ST_OVERSAMPLE_MULTIPLIER));
effect.get_parameter("pAngle").set_float(float_t(m_angle / _size));
effect.get_parameter("pCenter").set_float2(float_t(m_center.first), float_t(m_center.second));
effect.get_parameter("pImageTexel").set_float2(float(1.f / width), float(1.f / height));
effect.get_parameter("pStepScale").set_float2(float(_step_scale.first), float(_step_scale.second));
effect.get_parameter("pSize").set_float(float(_size * ST_OVERSAMPLE_MULTIPLIER));
effect.get_parameter("pAngle").set_float(float(m_angle / _size));
effect.get_parameter("pCenter").set_float2(float(m_center.first), float(m_center.second));
effect.get_parameter("pKernel").set_value(kernel.data(), ST_KERNEL_SIZE);
// First Pass
@ -563,8 +563,8 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::gaussian_zoom
return _input_texture;
}
float_t width = float_t(_input_texture->get_width());
float_t height = float_t(_input_texture->get_height());
float width = float(_input_texture->get_width());
float height = float(_input_texture->get_height());
// Setup
gs_set_cull_mode(GS_NEITHER);
@ -581,10 +581,10 @@ std::shared_ptr<::streamfx::obs::gs::texture> streamfx::gfx::blur::gaussian_zoom
gs_stencil_op(GS_STENCIL_BOTH, GS_ZERO, GS_ZERO, GS_ZERO);
effect.get_parameter("pImage").set_texture(_input_texture);
effect.get_parameter("pImageTexel").set_float2(float_t(1.f / width), float_t(1.f / height));
effect.get_parameter("pStepScale").set_float2(float_t(_step_scale.first), float_t(_step_scale.second));
effect.get_parameter("pSize").set_float(float_t(_size));
effect.get_parameter("pCenter").set_float2(float_t(m_center.first), float_t(m_center.second));
effect.get_parameter("pImageTexel").set_float2(float(1.f / width), float(1.f / height));
effect.get_parameter("pStepScale").set_float2(float(_step_scale.first), float(_step_scale.second));
effect.get_parameter("pSize").set_float(float(_size));
effect.get_parameter("pCenter").set_float2(float(m_center.first), float(m_center.second));
effect.get_parameter("pKernel").set_value(kernel.data(), ST_KERNEL_SIZE);
// First Pass

View File

@ -30,7 +30,7 @@ namespace streamfx::gfx {
std::shared_ptr<streamfx::gfx::util> get_gfx_util();
std::vector<float_t> const& get_kernel(std::size_t width);
std::vector<float> const& get_kernel(std::size_t width);
};
class gaussian_factory : public ::streamfx::gfx::blur::ifactory {

View File

@ -0,0 +1,9 @@
# AUTOGENERATED COPYRIGHT HEADER START
# Copyright (C) 2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
# AUTOGENERATED COPYRIGHT HEADER END
cmake_minimum_required(VERSION 3.26)
project("ColorGrade")
list(APPEND CMAKE_MESSAGE_INDENT "[${PROJECT_NAME}] ")
streamfx_add_component("Color Grade")

View File

@ -156,12 +156,12 @@ void color_grade_instance::allocate_rendertarget(gs_color_format format)
_cache_rt = std::make_unique<streamfx::obs::gs::rendertarget>(format, GS_ZS_NONE);
}
float_t fix_gamma_value(double_t v)
float fix_gamma_value(double_t v)
{
if (v < 0.0) {
return static_cast<float_t>(-v + 1.0);
return static_cast<float>(-v + 1.0);
} else {
return static_cast<float_t>(1.0 / (v + 1.0));
return static_cast<float>(1.0 / (v + 1.0));
}
}
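fix_gamma_value() above turns the signed gamma slider (already divided by 100 at the call sites) into an exponent: negative input raises it above 1, positive input drops it below 1, and both branches meet at exactly 1.0 for an input of 0. A few worked values, as a self-contained check rather than plugin code:

#include <cassert>
#include <cmath>

int main()
{
    auto gamma = [](double v) { return v < 0.0 ? float(-v + 1.0) : float(1.0 / (v + 1.0)); };
    assert(std::fabs(gamma(-1.0) - 2.0f) < 1e-6); // slider at -100% -> stronger gamma
    assert(std::fabs(gamma(0.0) - 1.0f) < 1e-6);  // neutral
    assert(std::fabs(gamma(1.0) - 0.5f) < 1e-6);  // slider at +100% -> weaker gamma
    return 0;
}
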
@ -174,38 +174,38 @@ void color_grade_instance::migrate(obs_data_t* data, uint64_t version) {}
void color_grade_instance::update(obs_data_t* data)
{
_lift.x = static_cast<float_t>(obs_data_get_double(data, ST_KEY_LIFT_(ST_RED)) / 100.0);
_lift.y = static_cast<float_t>(obs_data_get_double(data, ST_KEY_LIFT_(ST_GREEN)) / 100.0);
_lift.z = static_cast<float_t>(obs_data_get_double(data, ST_KEY_LIFT_(ST_BLUE)) / 100.0);
_lift.w = static_cast<float_t>(obs_data_get_double(data, ST_KEY_LIFT_(ST_ALL)) / 100.0);
_lift.x = static_cast<float>(obs_data_get_double(data, ST_KEY_LIFT_(ST_RED)) / 100.0);
_lift.y = static_cast<float>(obs_data_get_double(data, ST_KEY_LIFT_(ST_GREEN)) / 100.0);
_lift.z = static_cast<float>(obs_data_get_double(data, ST_KEY_LIFT_(ST_BLUE)) / 100.0);
_lift.w = static_cast<float>(obs_data_get_double(data, ST_KEY_LIFT_(ST_ALL)) / 100.0);
_gamma.x = fix_gamma_value(obs_data_get_double(data, ST_KEY_GAMMA_(ST_RED)) / 100.0);
_gamma.y = fix_gamma_value(obs_data_get_double(data, ST_KEY_GAMMA_(ST_GREEN)) / 100.0);
_gamma.z = fix_gamma_value(obs_data_get_double(data, ST_KEY_GAMMA_(ST_BLUE)) / 100.0);
_gamma.w = fix_gamma_value(obs_data_get_double(data, ST_KEY_GAMMA_(ST_ALL)) / 100.0);
_gain.x = static_cast<float_t>(obs_data_get_double(data, ST_KEY_GAIN_(ST_RED)) / 100.0);
_gain.y = static_cast<float_t>(obs_data_get_double(data, ST_KEY_GAIN_(ST_GREEN)) / 100.0);
_gain.z = static_cast<float_t>(obs_data_get_double(data, ST_KEY_GAIN_(ST_BLUE)) / 100.0);
_gain.w = static_cast<float_t>(obs_data_get_double(data, ST_KEY_GAIN_(ST_ALL)) / 100.0);
_offset.x = static_cast<float_t>(obs_data_get_double(data, ST_KEY_OFFSET_(ST_RED)) / 100.0);
_offset.y = static_cast<float_t>(obs_data_get_double(data, ST_KEY_OFFSET_(ST_GREEN)) / 100.0);
_offset.z = static_cast<float_t>(obs_data_get_double(data, ST_KEY_OFFSET_(ST_BLUE)) / 100.0);
_offset.w = static_cast<float_t>(obs_data_get_double(data, ST_KEY_OFFSET_(ST_ALL)) / 100.0);
_gain.x = static_cast<float>(obs_data_get_double(data, ST_KEY_GAIN_(ST_RED)) / 100.0);
_gain.y = static_cast<float>(obs_data_get_double(data, ST_KEY_GAIN_(ST_GREEN)) / 100.0);
_gain.z = static_cast<float>(obs_data_get_double(data, ST_KEY_GAIN_(ST_BLUE)) / 100.0);
_gain.w = static_cast<float>(obs_data_get_double(data, ST_KEY_GAIN_(ST_ALL)) / 100.0);
_offset.x = static_cast<float>(obs_data_get_double(data, ST_KEY_OFFSET_(ST_RED)) / 100.0);
_offset.y = static_cast<float>(obs_data_get_double(data, ST_KEY_OFFSET_(ST_GREEN)) / 100.0);
_offset.z = static_cast<float>(obs_data_get_double(data, ST_KEY_OFFSET_(ST_BLUE)) / 100.0);
_offset.w = static_cast<float>(obs_data_get_double(data, ST_KEY_OFFSET_(ST_ALL)) / 100.0);
_tint_detection = static_cast<detection_mode>(obs_data_get_int(data, ST_KEY_TINT_DETECTION));
_tint_luma = static_cast<luma_mode>(obs_data_get_int(data, ST_KEY_TINT_MODE));
_tint_exponent = static_cast<float_t>(obs_data_get_double(data, ST_KEY_TINT_EXPONENT));
_tint_low.x = static_cast<float_t>(obs_data_get_double(data, ST_KEY_TINT_(ST_TONE_LOW, ST_RED)) / 100.0);
_tint_low.y = static_cast<float_t>(obs_data_get_double(data, ST_KEY_TINT_(ST_TONE_LOW, ST_GREEN)) / 100.0);
_tint_low.z = static_cast<float_t>(obs_data_get_double(data, ST_KEY_TINT_(ST_TONE_LOW, ST_BLUE)) / 100.0);
_tint_mid.x = static_cast<float_t>(obs_data_get_double(data, ST_KEY_TINT_(ST_TONE_MID, ST_RED)) / 100.0);
_tint_mid.y = static_cast<float_t>(obs_data_get_double(data, ST_KEY_TINT_(ST_TONE_MID, ST_GREEN)) / 100.0);
_tint_mid.z = static_cast<float_t>(obs_data_get_double(data, ST_KEY_TINT_(ST_TONE_MID, ST_BLUE)) / 100.0);
_tint_hig.x = static_cast<float_t>(obs_data_get_double(data, ST_KEY_TINT_(ST_TONE_HIGH, ST_RED)) / 100.0);
_tint_hig.y = static_cast<float_t>(obs_data_get_double(data, ST_KEY_TINT_(ST_TONE_HIGH, ST_GREEN)) / 100.0);
_tint_hig.z = static_cast<float_t>(obs_data_get_double(data, ST_KEY_TINT_(ST_TONE_HIGH, ST_BLUE)) / 100.0);
_correction.x = static_cast<float_t>(obs_data_get_double(data, ST_KEY_CORRECTION_(ST_HUE)) / 360.0);
_correction.y = static_cast<float_t>(obs_data_get_double(data, ST_KEY_CORRECTION_(ST_SATURATION)) / 100.0);
_correction.z = static_cast<float_t>(obs_data_get_double(data, ST_KEY_CORRECTION_(ST_LIGHTNESS)) / 100.0);
_correction.w = static_cast<float_t>(obs_data_get_double(data, ST_KEY_CORRECTION_(ST_CONTRAST)) / 100.0);
_tint_exponent = static_cast<float>(obs_data_get_double(data, ST_KEY_TINT_EXPONENT));
_tint_low.x = static_cast<float>(obs_data_get_double(data, ST_KEY_TINT_(ST_TONE_LOW, ST_RED)) / 100.0);
_tint_low.y = static_cast<float>(obs_data_get_double(data, ST_KEY_TINT_(ST_TONE_LOW, ST_GREEN)) / 100.0);
_tint_low.z = static_cast<float>(obs_data_get_double(data, ST_KEY_TINT_(ST_TONE_LOW, ST_BLUE)) / 100.0);
_tint_mid.x = static_cast<float>(obs_data_get_double(data, ST_KEY_TINT_(ST_TONE_MID, ST_RED)) / 100.0);
_tint_mid.y = static_cast<float>(obs_data_get_double(data, ST_KEY_TINT_(ST_TONE_MID, ST_GREEN)) / 100.0);
_tint_mid.z = static_cast<float>(obs_data_get_double(data, ST_KEY_TINT_(ST_TONE_MID, ST_BLUE)) / 100.0);
_tint_hig.x = static_cast<float>(obs_data_get_double(data, ST_KEY_TINT_(ST_TONE_HIGH, ST_RED)) / 100.0);
_tint_hig.y = static_cast<float>(obs_data_get_double(data, ST_KEY_TINT_(ST_TONE_HIGH, ST_GREEN)) / 100.0);
_tint_hig.z = static_cast<float>(obs_data_get_double(data, ST_KEY_TINT_(ST_TONE_HIGH, ST_BLUE)) / 100.0);
_correction.x = static_cast<float>(obs_data_get_double(data, ST_KEY_CORRECTION_(ST_HUE)) / 360.0);
_correction.y = static_cast<float>(obs_data_get_double(data, ST_KEY_CORRECTION_(ST_SATURATION)) / 100.0);
_correction.z = static_cast<float>(obs_data_get_double(data, ST_KEY_CORRECTION_(ST_LIGHTNESS)) / 100.0);
_correction.w = static_cast<float>(obs_data_get_double(data, ST_KEY_CORRECTION_(ST_CONTRAST)) / 100.0);
{
int64_t v = obs_data_get_int(data, ST_KEY_RENDERMODE);
@ -370,7 +370,7 @@ void color_grade_instance::video_render(gs_effect_t* shader)
{
auto op = _ccache_rt->render(width, height);
gs_ortho(0, static_cast<float_t>(width), 0, static_cast<float_t>(height), 0, 1);
gs_ortho(0, static_cast<float>(width), 0, static_cast<float>(height), 0, 1);
// Blank out the input cache.
gs_clear(GS_CLEAR_COLOR | GS_CLEAR_DEPTH, &blank, 0., 0);
@ -613,11 +613,9 @@ obs_properties_t* color_grade_factory::get_properties2(color_grade_instance* dat
{
obs_properties_t* pr = obs_properties_create();
#ifdef ENABLE_FRONTEND
{
obs_properties_add_button2(pr, S_MANUAL_OPEN, D_TRANSLATE(S_MANUAL_OPEN), streamfx::filter::color_grade::color_grade_factory::on_manual_open, nullptr);
}
#endif
{
obs_properties_t* grp = obs_properties_create();
@ -810,7 +808,6 @@ obs_properties_t* color_grade_factory::get_properties2(color_grade_instance* dat
return pr;
}
#ifdef ENABLE_FRONTEND
bool color_grade_factory::on_manual_open(obs_properties_t* props, obs_property_t* property, void* data)
{
try {
@ -824,7 +821,6 @@ bool color_grade_factory::on_manual_open(obs_properties_t* props, obs_property_t
return false;
}
}
#endif
std::shared_ptr<color_grade_factory> streamfx::filter::color_grade::color_grade_factory::instance()
{

View File

@ -43,7 +43,7 @@ namespace streamfx::filter::color_grade {
vec4 _offset;
detection_mode _tint_detection;
luma_mode _tint_luma;
float_t _tint_exponent;
float _tint_exponent;
vec3 _tint_low;
vec3 _tint_mid;
vec3 _tint_hig;
@ -83,7 +83,7 @@ namespace streamfx::filter::color_grade {
void rebuild_lut();
virtual void video_tick(float_t time) override;
virtual void video_tick(float time) override;
virtual void video_render(gs_effect_t* effect) override;
};
@ -98,9 +98,7 @@ namespace streamfx::filter::color_grade {
virtual obs_properties_t* get_properties2(color_grade_instance* data) override;
#ifdef ENABLE_FRONTEND
static bool on_manual_open(obs_properties_t* props, obs_property_t* property, void* data);
#endif
public: // Singleton
static std::shared_ptr<color_grade_factory> instance();

View File

@ -1,5 +1,5 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2021-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// Copyright (C) 2020-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
#pragma once

View File

@ -1,5 +1,5 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2021-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// Copyright (C) 2020-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
#pragma once

View File

@ -1,5 +1,5 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2021-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// Copyright (C) 2020-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
#pragma once

View File

@ -0,0 +1,23 @@
# AUTOGENERATED COPYRIGHT HEADER START
# Copyright (C) 2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
# AUTOGENERATED COPYRIGHT HEADER END
cmake_minimum_required(VERSION 3.26)
project("Denoising")
list(APPEND CMAKE_MESSAGE_INDENT "[${PROJECT_NAME}] ")
streamfx_add_component("Denoising"
RESOLVER streamfx_denoising_resolver
)
streamfx_add_component_dependency("NVIDIA" OPTIONAL)
function(streamfx_denoising_resolver)
# Providers
#- NVIDIA
streamfx_enabled_component("NVIDIA" T_CHECK)
if(T_CHECK)
target_compile_definitions(${COMPONENT_TARGET} PRIVATE
ENABLE_NVIDIA
)
endif()
endfunction()

View File

@ -30,7 +30,7 @@
#define ST_I18N_PROVIDER ST_I18N "." ST_KEY_PROVIDER
#define ST_I18N_PROVIDER_NVIDIA_DENOISING ST_I18N_PROVIDER ".NVIDIA.Denoising"
#ifdef ENABLE_FILTER_DENOISING_NVIDIA
#ifdef ENABLE_NVIDIA
#define ST_KEY_NVIDIA_DENOISING "NVIDIA.Denoising"
#define ST_I18N_NVIDIA_DENOISING ST_I18N "." ST_KEY_NVIDIA_DENOISING
#define ST_KEY_NVIDIA_DENOISING_STRENGTH "NVIDIA.Denoising.Strength"
@ -121,7 +121,7 @@ denoising_instance::~denoising_instance()
// TODO: Make this asynchronous.
switch (_provider) {
#ifdef ENABLE_FILTER_DENOISING_NVIDIA
#ifdef ENABLE_NVIDIA
case denoising_provider::NVIDIA_DENOISING:
nvvfx_denoising_unload();
break;
@ -157,7 +157,7 @@ void denoising_instance::update(obs_data_t* data)
std::unique_lock<std::mutex> ul(_provider_lock);
switch (_provider) {
#ifdef ENABLE_FILTER_DENOISING_NVIDIA
#ifdef ENABLE_NVIDIA
case denoising_provider::NVIDIA_DENOISING:
nvvfx_denoising_update(data);
break;
@ -171,7 +171,7 @@ void denoising_instance::update(obs_data_t* data)
void streamfx::filter::denoising::denoising_instance::properties(obs_properties_t* properties)
{
switch (_provider_ui) {
#ifdef ENABLE_FILTER_DENOISING_NVIDIA
#ifdef ENABLE_NVIDIA
case denoising_provider::NVIDIA_DENOISING:
nvvfx_denoising_properties(properties);
break;
@ -191,7 +191,7 @@ uint32_t streamfx::filter::denoising::denoising_instance::get_height()
return std::max<uint32_t>(_size.second, 1);
}
void denoising_instance::video_tick(float_t time)
void denoising_instance::video_tick(float time)
{
auto parent = obs_filter_get_parent(_self);
auto target = obs_filter_get_target(_self);
@ -208,7 +208,7 @@ void denoising_instance::video_tick(float_t time)
std::unique_lock<std::mutex> ul(_provider_lock);
switch (_provider) {
#ifdef ENABLE_FILTER_DENOISING_NVIDIA
#ifdef ENABLE_NVIDIA
case denoising_provider::NVIDIA_DENOISING:
nvvfx_denoising_size();
break;
@ -252,7 +252,7 @@ void denoising_instance::video_render(gs_effect_t* effect)
{ // Allow the provider to restrict the size.
switch (_provider) {
#ifdef ENABLE_FILTER_DENOISING_NVIDIA
#ifdef ENABLE_NVIDIA
case denoising_provider::NVIDIA_DENOISING:
nvvfx_denoising_size();
break;
@ -305,7 +305,7 @@ void denoising_instance::video_render(gs_effect_t* effect)
::streamfx::obs::gs::debug_marker profiler1{::streamfx::obs::gs::debug_color_convert, "Process"};
#endif
switch (_provider) {
#ifdef ENABLE_FILTER_DENOISING_NVIDIA
#ifdef ENABLE_NVIDIA
case denoising_provider::NVIDIA_DENOISING:
nvvfx_denoising_process();
break;
@ -401,7 +401,7 @@ void streamfx::filter::denoising::denoising_instance::task_switch_provider(util:
try {
// 3. Unload the previous provider.
switch (spd->provider) {
#ifdef ENABLE_FILTER_DENOISING_NVIDIA
#ifdef ENABLE_NVIDIA
case denoising_provider::NVIDIA_DENOISING:
nvvfx_denoising_unload();
break;
@ -412,7 +412,7 @@ void streamfx::filter::denoising::denoising_instance::task_switch_provider(util:
// 4. Load the new provider.
switch (_provider) {
#ifdef ENABLE_FILTER_DENOISING_NVIDIA
#ifdef ENABLE_NVIDIA
case denoising_provider::NVIDIA_DENOISING:
nvvfx_denoising_load();
break;
@ -431,7 +431,7 @@ void streamfx::filter::denoising::denoising_instance::task_switch_provider(util:
}
}
#ifdef ENABLE_FILTER_DENOISING_NVIDIA
#ifdef ENABLE_NVIDIA
void streamfx::filter::denoising::denoising_instance::nvvfx_denoising_load()
{
_nvidia_fx = std::make_shared<::streamfx::nvidia::vfx::denoising>();
@ -493,7 +493,7 @@ denoising_factory::denoising_factory()
bool any_available = false;
// 1. Try and load any configured providers.
#ifdef ENABLE_FILTER_DENOISING_NVIDIA
#ifdef ENABLE_NVIDIA
try {
// Load CVImage and Video Effects SDK.
_nvcuda = ::streamfx::nvidia::cuda::obs::get();
@ -543,7 +543,7 @@ void denoising_factory::get_defaults2(obs_data_t* data)
{
obs_data_set_default_int(data, ST_KEY_PROVIDER, static_cast<int64_t>(denoising_provider::AUTOMATIC));
#ifdef ENABLE_FILTER_DENOISING_NVIDIA
#ifdef ENABLE_NVIDIA
obs_data_set_default_double(data, ST_KEY_NVIDIA_DENOISING_STRENGTH, 1.);
#endif
}
@ -565,11 +565,9 @@ obs_properties_t* denoising_factory::get_properties2(denoising_instance* data)
{
obs_properties_t* pr = obs_properties_create();
#ifdef ENABLE_FRONTEND
{
obs_properties_add_button2(pr, S_MANUAL_OPEN, D_TRANSLATE(S_MANUAL_OPEN), denoising_factory::on_manual_open, nullptr);
}
#endif
if (data) {
data->properties(pr);
@ -590,18 +588,16 @@ obs_properties_t* denoising_factory::get_properties2(denoising_instance* data)
return pr;
}
#ifdef ENABLE_FRONTEND
bool denoising_factory::on_manual_open(obs_properties_t* props, obs_property_t* property, void* data)
{
streamfx::open_url(HELP_URL);
return false;
}
#endif
bool streamfx::filter::denoising::denoising_factory::is_provider_available(denoising_provider provider)
{
switch (provider) {
#ifdef ENABLE_FILTER_DENOISING_NVIDIA
#ifdef ENABLE_NVIDIA
case denoising_provider::NVIDIA_DENOISING:
return _nvidia_available;
#endif

View File

@ -16,7 +16,7 @@
#include <mutex>
#include "warning-enable.hpp"
#ifdef ENABLE_FILTER_DENOISING_NVIDIA
#ifdef ENABLE_NVIDIA
#include "nvidia/vfx/nvidia-vfx-denoising.hpp"
#endif
@ -48,7 +48,7 @@ namespace streamfx::filter::denoising {
std::shared_ptr<::streamfx::obs::gs::texture> _output;
bool _dirty;
#ifdef ENABLE_FILTER_DENOISING_NVIDIA
#ifdef ENABLE_NVIDIA
std::shared_ptr<::streamfx::nvidia::vfx::denoising> _nvidia_fx;
#endif
@ -64,14 +64,14 @@ namespace streamfx::filter::denoising {
uint32_t get_width() override;
uint32_t get_height() override;
void video_tick(float_t time) override;
void video_tick(float time) override;
void video_render(gs_effect_t* effect) override;
private:
void switch_provider(denoising_provider provider);
void task_switch_provider(util::threadpool::task_data_t data);
#ifdef ENABLE_FILTER_DENOISING_NVIDIA
#ifdef ENABLE_NVIDIA
void nvvfx_denoising_load();
void nvvfx_denoising_unload();
void nvvfx_denoising_size();
@ -82,7 +82,7 @@ namespace streamfx::filter::denoising {
};
class denoising_factory : public obs::source_factory<::streamfx::filter::denoising::denoising_factory, ::streamfx::filter::denoising::denoising_instance> {
#ifdef ENABLE_FILTER_DENOISING_NVIDIA
#ifdef ENABLE_NVIDIA
bool _nvidia_available;
std::shared_ptr<::streamfx::nvidia::cuda::obs> _nvcuda;
std::shared_ptr<::streamfx::nvidia::cv::cv> _nvcvi;
@ -98,9 +98,7 @@ namespace streamfx::filter::denoising {
virtual void get_defaults2(obs_data_t* data) override;
virtual obs_properties_t* get_properties2(denoising_instance* data) override;
#ifdef ENABLE_FRONTEND
static bool on_manual_open(obs_properties_t* props, obs_property_t* property, void* data);
#endif
bool is_provider_available(denoising_provider);
denoising_provider find_ideal_provider();

View File

@ -0,0 +1,9 @@
# AUTOGENERATED COPYRIGHT HEADER START
# Copyright (C) 2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
# AUTOGENERATED COPYRIGHT HEADER END
cmake_minimum_required(VERSION 3.26)
project("DynamicMask")
list(APPEND CMAKE_MESSAGE_INDENT "[${PROJECT_NAME}] ")
streamfx_add_component("Dynamic Mask")

View File

@ -158,11 +158,11 @@ void dynamic_mask_instance::update(obs_data_t* settings)
}
std::string chv_key = std::string(ST_KEY_CHANNEL_VALUE) + "." + kv1.second;
found->second.value = static_cast<float_t>(obs_data_get_double(settings, chv_key.c_str()));
found->second.value = static_cast<float>(obs_data_get_double(settings, chv_key.c_str()));
_precalc.base.ptr[static_cast<size_t>(kv1.first)] = found->second.value;
std::string chm_key = std::string(ST_KEY_CHANNEL_MULTIPLIER) + "." + kv1.second;
found->second.scale = static_cast<float_t>(obs_data_get_double(settings, chm_key.c_str()));
found->second.scale = static_cast<float>(obs_data_get_double(settings, chm_key.c_str()));
_precalc.scale.ptr[static_cast<size_t>(kv1.first)] = found->second.scale;
vec4* ch = &_precalc.matrix.x;
@ -185,7 +185,7 @@ void dynamic_mask_instance::update(obs_data_t* settings)
for (auto kv2 : channel_translations) {
std::string ab_key = std::string(ST_KEY_CHANNEL_INPUT) + "." + kv1.second + "." + kv2.second;
found->second.values.ptr[static_cast<size_t>(kv2.first)] = static_cast<float_t>(obs_data_get_double(settings, ab_key.c_str()));
found->second.values.ptr[static_cast<size_t>(kv2.first)] = static_cast<float>(obs_data_get_double(settings, ab_key.c_str()));
ch->ptr[static_cast<size_t>(kv2.first)] = found->second.values.ptr[static_cast<size_t>(kv2.first)];
}
}
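A plausible reading of the precalculated values above, stated as an assumption rather than something this hunk shows: "value" acts as a per-channel base offset, "scale" as a multiplier, and the four "values" entries form one row of a 4x4 mix matrix applied to the input pixel, along the lines of the sketch below.

#include <array>
#include <cstddef>

using vec4f = std::array<float, 4>;

// Hypothetical combination; the real shader may order these operations differently.
static float mask_channel(const vec4f& input, float base, float scale, const vec4f& matrix_row)
{
    float mixed = 0.f;
    for (std::size_t i = 0; i < 4; i++)
        mixed += matrix_row[i] * input[i];
    return base + scale * mixed;
}
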
@ -721,11 +721,9 @@ obs_properties_t* dynamic_mask_factory::get_properties2(dynamic_mask_instance* d
_translation_cache.clear();
#ifdef ENABLE_FRONTEND
{
obs_properties_add_button2(props, S_MANUAL_OPEN, D_TRANSLATE(S_MANUAL_OPEN), streamfx::filter::dynamic_mask::dynamic_mask_factory::on_manual_open, nullptr);
}
#endif
{ // Input
p = obs_properties_add_list(props, ST_KEY_INPUT, D_TRANSLATE(ST_I18N_INPUT), OBS_COMBO_TYPE_LIST, OBS_COMBO_FORMAT_STRING);
@ -806,7 +804,6 @@ std::string dynamic_mask_factory::translate_string(const char* format, ...)
return std::string(buffer.data(), buffer.data() + len);
}
#ifdef ENABLE_FRONTEND
bool dynamic_mask_factory::on_manual_open(obs_properties_t* props, obs_property_t* property, void* data)
{
try {
@ -820,7 +817,6 @@ bool dynamic_mask_factory::on_manual_open(obs_properties_t* props, obs_property_
return false;
}
}
#endif
std::shared_ptr<dynamic_mask_factory> dynamic_mask_factory::instance()
{

View File

@ -72,8 +72,8 @@ namespace streamfx::filter::dynamic_mask {
int64_t _debug_texture;
struct channel_data {
float_t value = 0.0;
float_t scale = 1.0;
float value = 0.0;
float scale = 1.0;
vec4 values = {0, 0, 0, 0};
};
std::map<channel, channel_data> _channels;
@ -94,7 +94,7 @@ namespace streamfx::filter::dynamic_mask {
virtual void save(obs_data_t* settings) override;
virtual gs_color_space video_get_color_space(size_t count, const gs_color_space* preferred_spaces) override;
virtual void video_tick(float_t time) override;
virtual void video_tick(float time) override;
virtual void video_render(gs_effect_t* effect) override;
void enum_active_sources(obs_source_enum_proc_t enum_callback, void* param) override;
@ -124,9 +124,7 @@ namespace streamfx::filter::dynamic_mask {
std::string translate_string(const char* format, ...);
#ifdef ENABLE_FRONTEND
static bool on_manual_open(obs_properties_t* props, obs_property_t* property, void* data);
#endif
public: // Singleton
static void initialize();

View File

@ -0,0 +1,27 @@
# AUTOGENERATED COPYRIGHT HEADER START
# Copyright (C) 2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
# AUTOGENERATED COPYRIGHT HEADER END
cmake_minimum_required(VERSION 3.26)
project("FFmpeg")
list(APPEND CMAKE_MESSAGE_INDENT "[${PROJECT_NAME}] ")
streamfx_add_component(${PROJECT_NAME})
find_package("FFmpeg"
COMPONENTS "avutil" "avcodec" "swscale"
)
if(NOT FFmpeg_FOUND)
streamfx_disable_component(${COMPONENT_TARGET} "FFmpeg is not available.")
return()
else()
target_link_libraries(${COMPONENT_TARGET}
PUBLIC
${FFMPEG_LIBRARIES}
)
target_include_directories(${COMPONENT_TARGET}
PUBLIC
${FFMPEG_INCLUDE_DIRS}
)
endif()

View File

@ -1,5 +1,5 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2021-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// Copyright (C) 2020-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
#include "av1.hpp"

View File

@ -1,5 +1,5 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2021-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// Copyright (C) 2020-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
#pragma once

View File

@ -1,6 +1,6 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2020-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// Copyright (C) 2022 Carsten Braun <info@braun-cloud.de>
// Copyright (C) 2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
#pragma once

View File

@ -10,7 +10,6 @@
#include "strings.hpp"
#include "codecs/hevc.hpp"
#include "ffmpeg/tools.hpp"
#include "handlers/debug_handler.hpp"
#include "obs/gs/gs-helper.hpp"
#include "plugin.hpp"
@ -32,24 +31,6 @@ extern "C" {
#include "warning-enable.hpp"
}
#ifdef ENABLE_ENCODER_FFMPEG_AMF
#include "handlers/amf_h264_handler.hpp"
#include "handlers/amf_hevc_handler.hpp"
#endif
#ifdef ENABLE_ENCODER_FFMPEG_NVENC
#include "handlers/nvenc_h264_handler.hpp"
#include "handlers/nvenc_hevc_handler.hpp"
#endif
#ifdef ENABLE_ENCODER_FFMPEG_PRORES
#include "handlers/prores_aw_handler.hpp"
#endif
#ifdef ENABLE_ENCODER_FFMPEG_DNXHR
#include "handlers/dnxhd_handler.hpp"
#endif
#ifdef WIN32
#include "ffmpeg/hwapi/d3d11.hpp"
#endif
@ -176,7 +157,7 @@ ffmpeg_instance::~ffmpeg_instance()
void ffmpeg_instance::get_properties(obs_properties_t* props)
{
if (_handler)
_handler->get_properties(props, _codec, _context, _handler->is_hardware_encoder(_factory));
_handler->properties(this->_factory, this, props);
obs_property_set_enabled(obs_properties_get(props, ST_KEY_KEYFRAMES_INTERVALTYPE), false);
obs_property_set_enabled(obs_properties_get(props, ST_KEY_KEYFRAMES_INTERVAL_SECONDS), false);
@ -189,7 +170,7 @@ void ffmpeg_instance::get_properties(obs_properties_t* props)
void ffmpeg_instance::migrate(obs_data_t* settings, uint64_t version)
{
if (_handler)
_handler->migrate(settings, version, _codec, _context);
_handler->migrate(this->_factory, this, settings, version);
}
bool ffmpeg_instance::update(obs_data_t* settings)
@ -199,7 +180,7 @@ bool ffmpeg_instance::update(obs_data_t* settings)
bool support_reconfig_gpu = false;
bool support_reconfig_keyframes = false;
if (_handler) {
support_reconfig = _handler->supports_reconfigure(_factory, support_reconfig_threads, support_reconfig_gpu, support_reconfig_keyframes);
support_reconfig = _handler->is_reconfigurable(_factory, support_reconfig_threads, support_reconfig_gpu, support_reconfig_keyframes);
}
if (!_context->internal) {
@ -245,7 +226,7 @@ bool ffmpeg_instance::update(obs_data_t* settings)
if (!_context->internal || (support_reconfig && support_reconfig_keyframes)) {
// Keyframes
if (_handler && _handler->has_keyframe_support(_factory)) {
if (_handler && _handler->has_keyframes(_factory)) {
// Key-Frame Options
obs_video_info ovi;
if (!obs_get_video_info(&ovi)) {
@ -268,7 +249,7 @@ bool ffmpeg_instance::update(obs_data_t* settings)
if (!_context->internal || support_reconfig) {
// Handler Options
if (_handler)
_handler->update(settings, _codec, _context);
_handler->update(this->_factory, this, settings);
{ // FFmpeg Custom Options
const char* opts = obs_data_get_string(settings, ST_KEY_FFMPEG_CUSTOMSETTINGS);
@ -279,7 +260,7 @@ bool ffmpeg_instance::update(obs_data_t* settings)
// Handler Overrides
if (_handler)
_handler->override_update(this, settings);
_handler->override_update(this->_factory, this, settings);
}
// Handler Logging
@ -310,7 +291,7 @@ bool ffmpeg_instance::update(obs_data_t* settings)
}
if (_handler) {
_handler->log_options(settings, _codec, _context);
_handler->log(this->_factory, this, settings);
}
}
@ -423,46 +404,44 @@ bool ffmpeg_instance::encode_video(uint32_t handle, int64_t pts, uint64_t lock_k
void ffmpeg_instance::initialize_sw(obs_data_t* settings)
{
if (_codec->type == AVMEDIA_TYPE_VIDEO) {
// Initialize Video Encoding
auto voi = video_output_get_info(obs_encoder_video(_self));
// Initialize Video Encoding
auto voi = video_output_get_info(obs_encoder_video(_self));
// Figure out a suitable pixel format to convert to if necessary.
AVPixelFormat pix_fmt_source = ::streamfx::ffmpeg::tools::obs_videoformat_to_avpixelformat(voi->format);
AVPixelFormat pix_fmt_target = AV_PIX_FMT_NONE;
{
if (_codec->pix_fmts) {
pix_fmt_target = ::streamfx::ffmpeg::tools::get_least_lossy_format(_codec->pix_fmts, pix_fmt_source);
} else { // If there are no supported formats, just pass in the current one.
pix_fmt_target = pix_fmt_source;
}
if (_handler) // Allow Handler to override the automatic color format for sanity reasons.
_handler->override_colorformat(pix_fmt_target, settings, _codec, _context);
// Figure out a suitable pixel format to convert to if necessary.
AVPixelFormat pix_fmt_source = ::streamfx::ffmpeg::tools::obs_videoformat_to_avpixelformat(voi->format);
AVPixelFormat pix_fmt_target = AV_PIX_FMT_NONE;
{
if (_codec->pix_fmts) {
pix_fmt_target = ::streamfx::ffmpeg::tools::get_least_lossy_format(_codec->pix_fmts, pix_fmt_source);
} else { // If there are no supported formats, just pass in the current one.
pix_fmt_target = pix_fmt_source;
}
// Setup from OBS information.
::streamfx::ffmpeg::tools::context_setup_from_obs(voi, _context);
if (_handler) // Allow Handler to override the automatic color format for sanity reasons.
_handler->override_colorformat(this->_factory, this, settings, pix_fmt_target);
}
// Override with other information.
_context->width = static_cast<int>(obs_encoder_get_width(_self));
_context->height = static_cast<int>(obs_encoder_get_height(_self));
_context->pix_fmt = pix_fmt_target;
// Setup from OBS information.
::streamfx::ffmpeg::tools::context_setup_from_obs(voi, _context);
_scaler.set_source_size(static_cast<uint32_t>(_context->width), static_cast<uint32_t>(_context->height));
_scaler.set_source_color(_context->color_range == AVCOL_RANGE_JPEG, _context->colorspace);
_scaler.set_source_format(pix_fmt_source);
// Override with other information.
_context->width = static_cast<int>(obs_encoder_get_width(_self));
_context->height = static_cast<int>(obs_encoder_get_height(_self));
_context->pix_fmt = pix_fmt_target;
_scaler.set_target_size(static_cast<uint32_t>(_context->width), static_cast<uint32_t>(_context->height));
_scaler.set_target_color(_context->color_range == AVCOL_RANGE_JPEG, _context->colorspace);
_scaler.set_target_format(pix_fmt_target);
_scaler.set_source_size(static_cast<uint32_t>(_context->width), static_cast<uint32_t>(_context->height));
_scaler.set_source_color(_context->color_range == AVCOL_RANGE_JPEG, _context->colorspace);
_scaler.set_source_format(pix_fmt_source);
// Create Scaler
if (!_scaler.initialize(SWS_SINC | SWS_FULL_CHR_H_INT | SWS_FULL_CHR_H_INP | SWS_ACCURATE_RND | SWS_BITEXACT)) {
std::stringstream sstr;
sstr << "Initializing scaler failed for conversion from '" << ::streamfx::ffmpeg::tools::get_pixel_format_name(_scaler.get_source_format()) << "' to '" << ::streamfx::ffmpeg::tools::get_pixel_format_name(_scaler.get_target_format()) << "' with color space '" << ::streamfx::ffmpeg::tools::get_color_space_name(_scaler.get_source_colorspace()) << "' and " << (_scaler.is_source_full_range() ? "full" : "partial") << " range.";
throw std::runtime_error(sstr.str());
}
_scaler.set_target_size(static_cast<uint32_t>(_context->width), static_cast<uint32_t>(_context->height));
_scaler.set_target_color(_context->color_range == AVCOL_RANGE_JPEG, _context->colorspace);
_scaler.set_target_format(pix_fmt_target);
// Create Scaler
if (!_scaler.initialize(SWS_SINC | SWS_FULL_CHR_H_INT | SWS_FULL_CHR_H_INP | SWS_ACCURATE_RND | SWS_BITEXACT)) {
std::stringstream sstr;
sstr << "Initializing scaler failed for conversion from '" << ::streamfx::ffmpeg::tools::get_pixel_format_name(_scaler.get_source_format()) << "' to '" << ::streamfx::ffmpeg::tools::get_pixel_format_name(_scaler.get_target_format()) << "' with color space '" << ::streamfx::ffmpeg::tools::get_color_space_name(_scaler.get_source_colorspace()) << "' and " << (_scaler.is_source_full_range() ? "full" : "partial") << " range.";
throw std::runtime_error(sstr.str());
}
}
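For context on the rearranged block above: the set_source_*/set_target_* calls plus initialize() configure a software scaler that converts OBS frames into the encoder's pixel format. Roughly the same configuration expressed against the raw libswscale API, as an illustration only (the wrapper's internals are not part of this diff, and the names below are assumptions):

extern "C" {
#include <libswscale/swscale.h>
}

static SwsContext* make_scaler(int width, int height, AVPixelFormat src, AVPixelFormat dst)
{
    // One context that converts between the two pixel formats at the encoder's resolution,
    // using the same flag set the wrapper is initialized with above.
    return sws_getContext(width, height, src, width, height, dst,
                          SWS_SINC | SWS_FULL_CHR_H_INT | SWS_FULL_CHR_H_INP | SWS_ACCURATE_RND | SWS_BITEXACT,
                          nullptr, nullptr, nullptr);
}
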
@ -634,8 +613,9 @@ int ffmpeg_instance::receive_packet(bool* received_packet, struct encoder_packet
}
// Allow Handler Post-Processing
if (_handler)
_handler->process_avpacket(_packet, _codec, _context);
//FIXME! Is this still necessary?
//if (_handler)
// _handler->process_avpacket(_packet, _codec, _context);
// Build packet for use in OBS.
packet->type = OBS_ENCODER_VIDEO;
@ -799,7 +779,7 @@ const AVCodec* ffmpeg_instance::get_avcodec()
return _codec;
}
const AVCodecContext* ffmpeg_instance::get_avcodeccontext()
AVCodecContext* ffmpeg_instance::get_avcodeccontext()
{
return _context;
}
@ -960,10 +940,10 @@ ffmpeg_factory::ffmpeg_factory(ffmpeg_manager* manager, const AVCodec* codec) :
// Find any available handlers for this codec.
if (_handler = manager->get_handler(_avcodec->name); _handler) {
// Override any found info with the one specified by the handler.
_handler->adjust_info(this, _avcodec, _id, _name, _codec);
_handler->adjust_info(this, _id, _name, _codec);
// Add texture capability for hardware encoders.
if (_handler->is_hardware_encoder(this)) {
if (_handler->is_hardware(this)) {
_info.caps |= OBS_ENCODER_CAP_PASS_TEXTURE;
}
} else {
@ -1007,9 +987,9 @@ const char* ffmpeg_factory::get_name()
void ffmpeg_factory::get_defaults2(obs_data_t* settings)
{
if (_handler) {
_handler->get_defaults(settings, _avcodec, nullptr, _handler->is_hardware_encoder(this));
_handler->defaults(this, settings);
if (_handler->has_keyframe_support(this)) {
if (_handler->has_keyframes(this)) {
obs_data_set_default_int(settings, ST_KEY_KEYFRAMES_INTERVALTYPE, 0);
obs_data_set_default_double(settings, ST_KEY_KEYFRAMES_INTERVAL_SECONDS, 2.0);
obs_data_set_default_int(settings, ST_KEY_KEYFRAMES_INTERVAL_FRAMES, 300);
@ -1027,7 +1007,7 @@ void ffmpeg_factory::get_defaults2(obs_data_t* settings)
void ffmpeg_factory::migrate(obs_data_t* data, uint64_t version)
{
if (_handler)
_handler->migrate(data, version, _avcodec, nullptr);
_handler->migrate(this, nullptr, data, version);
}
static bool modified_keyframes(obs_properties_t* props, obs_property_t*, obs_data_t* settings) noexcept
@ -1050,20 +1030,18 @@ obs_properties_t* ffmpeg_factory::get_properties2(instance_t* data)
{
obs_properties_t* props = obs_properties_create();
#ifdef ENABLE_FRONTEND
{
obs_properties_add_button2(props, S_MANUAL_OPEN, D_TRANSLATE(S_MANUAL_OPEN), streamfx::encoder::ffmpeg::ffmpeg_factory::on_manual_open, this);
}
#endif
if (data) {
data->get_properties(props);
}
if (_handler)
_handler->get_properties(props, _avcodec, nullptr, _handler->is_hardware_encoder(this));
_handler->properties(this, data, props);
if (_handler && _handler->has_keyframe_support(this)) {
if (_handler && _handler->has_keyframes(this)) {
// Key-Frame Options
obs_properties_t* grp = props;
if (!streamfx::util::are_property_groups_broken()) {
@ -1099,11 +1077,11 @@ obs_properties_t* ffmpeg_factory::get_properties2(instance_t* data)
auto p = obs_properties_add_text(grp, ST_KEY_FFMPEG_CUSTOMSETTINGS, D_TRANSLATE(ST_I18N_FFMPEG_CUSTOMSETTINGS), obs_text_type::OBS_TEXT_DEFAULT);
}
if (_handler && _handler->is_hardware_encoder(this)) {
if (_handler && _handler->is_hardware(this)) {
auto p = obs_properties_add_int(grp, ST_KEY_FFMPEG_GPU, D_TRANSLATE(ST_I18N_FFMPEG_GPU), -1, std::numeric_limits<uint8_t>::max(), 1);
}
if (_handler && _handler->has_threading_support(this)) {
if (_handler && _handler->has_threading(this)) {
auto p = obs_properties_add_int_slider(grp, ST_KEY_FFMPEG_THREADS, D_TRANSLATE(ST_I18N_FFMPEG_THREADS), 0, static_cast<int64_t>(std::thread::hardware_concurrency()) * 2, 1);
}
@ -1128,14 +1106,12 @@ obs_properties_t* ffmpeg_factory::get_properties2(instance_t* data)
return props;
}
#ifdef ENABLE_FRONTEND
bool ffmpeg_factory::on_manual_open(obs_properties_t* props, obs_property_t* property, void* data)
{
ffmpeg_factory* ptr = static_cast<ffmpeg_factory*>(data);
streamfx::open_url(ptr->_handler->get_help_url(ptr->_avcodec));
streamfx::open_url(ptr->_handler->help(ptr));
return false;
}
#endif
const AVCodec* ffmpeg_factory::get_avcodec()
{
@ -1147,25 +1123,8 @@ obs_encoder_info* streamfx::encoder::ffmpeg::ffmpeg_factory::get_info()
return &_info;
}
ffmpeg_manager::ffmpeg_manager() : _factories(), _handlers(), _debug_handler()
ffmpeg_manager::ffmpeg_manager() : _factories()
{
// Handlers
_debug_handler = ::std::make_shared<handler::debug_handler>();
#ifdef ENABLE_ENCODER_FFMPEG_AMF
register_handler("h264_amf", ::std::make_shared<handler::amf_h264_handler>());
register_handler("hevc_amf", ::std::make_shared<handler::amf_hevc_handler>());
#endif
#ifdef ENABLE_ENCODER_FFMPEG_NVENC
register_handler("h264_nvenc", ::std::make_shared<handler::nvenc_h264_handler>());
register_handler("hevc_nvenc", ::std::make_shared<handler::nvenc_hevc_handler>());
#endif
#ifdef ENABLE_ENCODER_FFMPEG_PRORES
register_handler("prores_aw", ::std::make_shared<handler::prores_aw_handler>());
#endif
#ifdef ENABLE_ENCODER_FFMPEG_DNXHR
register_handler("dnxhd", ::std::make_shared<handler::dnxhd_handler>());
#endif
// Encoders
void* iterator = nullptr;
for (const AVCodec* codec = av_codec_iterate(&iterator); codec != nullptr; codec = av_codec_iterate(&iterator)) {
@ -1173,7 +1132,7 @@ ffmpeg_manager::ffmpeg_manager() : _factories(), _handlers(), _debug_handler()
if (!av_codec_is_encoder(codec))
continue;
if ((codec->type == AVMediaType::AVMEDIA_TYPE_AUDIO) || (codec->type == AVMediaType::AVMEDIA_TYPE_VIDEO)) {
if (codec->type == AVMediaType::AVMEDIA_TYPE_VIDEO) {
try {
_factories.emplace(codec, std::make_shared<ffmpeg_factory>(this, codec));
} catch (const std::exception& ex) {
@ -1188,28 +1147,6 @@ ffmpeg_manager::~ffmpeg_manager()
_factories.clear();
}
void ffmpeg_manager::register_handler(std::string codec, std::shared_ptr<handler::handler> handler)
{
_handlers.emplace(codec, handler);
}
std::shared_ptr<handler::handler> ffmpeg_manager::get_handler(std::string codec)
{
auto fnd = _handlers.find(codec);
if (fnd != _handlers.end())
return fnd->second;
#ifdef _DEBUG
return _debug_handler;
#else
return nullptr;
#endif
}
bool ffmpeg_manager::has_handler(std::string_view codec)
{
return (_handlers.find(codec.data()) != _handlers.end());
}
std::shared_ptr<ffmpeg_manager> ffmpeg_manager::instance()
{
static std::weak_ptr<ffmpeg_manager> winst;
@ -1224,6 +1161,30 @@ std::shared_ptr<ffmpeg_manager> ffmpeg_manager::instance()
return instance;
}
streamfx::encoder::ffmpeg::handler* ffmpeg_manager::find_handler(std::string_view codec)
{
auto handlers = streamfx::encoder::ffmpeg::handler::handlers();
if (auto kv = handlers.find(std::string{codec}); kv != handlers.end()) {
return kv->second;
}
#ifdef _DEBUG
if (auto kv = handlers.find(""); kv != handlers.end()) {
return kv->second;
}
#endif
return nullptr;
}
streamfx::encoder::ffmpeg::handler* ffmpeg_manager::get_handler(std::string_view codec)
{
return find_handler(codec);
}
bool ffmpeg_manager::has_handler(std::string_view codec)
{
return find_handler(codec) != nullptr;
}
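With register_handler removed, handlers now attach themselves to the static map returned by handler::handlers() the moment their global instance is constructed (see handler.cpp later in this diff), and find_handler simply looks the codec name up in that map. Below is a minimal sketch of a self-registering handler under this scheme; the class example_handler and the codec id "example" are illustrative only and not part of the change:

#include "handler.hpp"

namespace streamfx::encoder::ffmpeg {
	// The base class constructor inserts `this` into handler::handlers(),
	// keyed by the FFmpeg codec name passed here.
	class example_handler : public handler {
		public:
		example_handler() : handler("example") {}
		virtual ~example_handler(){};

		// Override only what differs from the capability-derived defaults.
		bool is_hardware(ffmpeg_factory* factory) override
		{
			return false;
		}
	};

	// A single static instance performs the registration, after which
	// ffmpeg_manager::find_handler("example") can resolve it.
	static example_handler example_instance;
} // namespace streamfx::encoder::ffmpeg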
static std::shared_ptr<ffmpeg_manager> loader_instance;
static auto loader = streamfx::loader(

View File

@ -5,10 +5,10 @@
#pragma once
#include "common.hpp"
#include "encoders/ffmpeg/handler.hpp"
#include "ffmpeg/avframe-queue.hpp"
#include "ffmpeg/hwapi/base.hpp"
#include "ffmpeg/swscale.hpp"
#include "handlers/handler.hpp"
#include "obs/obs-encoder-factory.hpp"
#include "warning-disable.hpp"
@ -17,16 +17,15 @@
#include <mutex>
#include <queue>
#include <stack>
#include <string>
#include <string_view>
#include <thread>
#include <vector>
#include "warning-enable.hpp"
extern "C" {
#include "warning-disable.hpp"
#include <libavcodec/avcodec.h>
#include <libavutil/frame.h>
#include "warning-enable.hpp"
}
#include "warning-enable.hpp"
namespace streamfx::encoder::ffmpeg {
class ffmpeg_instance;
@ -38,7 +37,7 @@ namespace streamfx::encoder::ffmpeg {
const AVCodec* _codec;
AVCodecContext* _context;
std::shared_ptr<handler::handler> _handler;
streamfx::encoder::ffmpeg::handler* _handler;
::streamfx::ffmpeg::swscale _scaler;
std::shared_ptr<AVPacket> _packet;
@ -104,7 +103,7 @@ namespace streamfx::encoder::ffmpeg {
const AVCodec* get_avcodec();
const AVCodecContext* get_avcodeccontext();
AVCodecContext* get_avcodeccontext();
void parse_ffmpeg_commandline(std::string_view text);
};
@ -116,7 +115,7 @@ namespace streamfx::encoder::ffmpeg {
const AVCodec* _avcodec;
std::shared_ptr<handler::handler> _handler;
streamfx::encoder::ffmpeg::handler* _handler;
public:
ffmpeg_factory(ffmpeg_manager* manager, const AVCodec* codec);
@ -130,9 +129,7 @@ namespace streamfx::encoder::ffmpeg {
obs_properties_t* get_properties2(instance_t* data) override;
#ifdef ENABLE_FRONTEND
static bool on_manual_open(obs_properties_t* props, obs_property_t* property, void* data);
#endif
public:
const AVCodec* get_avcodec();
@ -142,16 +139,14 @@ namespace streamfx::encoder::ffmpeg {
class ffmpeg_manager {
std::map<const AVCodec*, std::shared_ptr<ffmpeg_factory>> _factories;
std::map<std::string, std::shared_ptr<handler::handler>> _handlers;
std::shared_ptr<handler::handler> _debug_handler;
public:
ffmpeg_manager();
~ffmpeg_manager();
void register_handler(std::string codec, std::shared_ptr<handler::handler> handler);
streamfx::encoder::ffmpeg::handler* find_handler(std::string_view codec);
std::shared_ptr<handler::handler> get_handler(std::string codec);
streamfx::encoder::ffmpeg::handler* get_handler(std::string_view codec);
bool has_handler(std::string_view codec);

View File

@ -6,14 +6,20 @@
// THIS FEATURE IS DEPRECATED. SUBMITTED PATCHES WILL BE REJECTED.
//--------------------------------------------------------------------------------//
#include "amf_shared.hpp"
#include "amf.hpp"
#include "common.hpp"
#include "strings.hpp"
#include "encoders/codecs/h264.hpp"
#include "encoders/codecs/hevc.hpp"
#include "encoders/encoder-ffmpeg.hpp"
#include "ffmpeg/tools.hpp"
#include "plugin.hpp"
extern "C" {
#include "warning-disable.hpp"
extern "C" {
#include <libavutil/opt.h>
#include "warning-enable.hpp"
}
#include "warning-enable.hpp"
// Translation
#define ST_I18N "Encoder.FFmpeg.AMF"
@ -60,7 +66,17 @@ extern "C" {
#define ST_KEY_OTHER_VBAQ "Other.VBAQ"
#define ST_KEY_OTHER_ACCESSUNITDELIMITER "Other.AccessUnitDelimiter"
using namespace streamfx::encoder::ffmpeg::handler;
// Settings
#define ST_KEY_H264_PROFILE "H264.Profile"
#define ST_KEY_H264_LEVEL "H264.Level"
// Settings
#define ST_KEY_HEVC_PROFILE "H265.Profile"
#define ST_KEY_HEVC_TIER "H265.Tier"
#define ST_KEY_HEVC_LEVEL "H265.Level"
using namespace streamfx::encoder::ffmpeg;
using namespace streamfx::encoder::codec;
std::map<amf::preset, std::string> amf::presets{
{amf::preset::SPEED, ST_I18N_PRESET_("Speed")},
@ -88,7 +104,30 @@ std::map<amf::ratecontrolmode, std::string> amf::ratecontrolmode_to_opt{
{amf::ratecontrolmode::VBR_LATENCY, "vbr_latency"},
};
bool streamfx::encoder::ffmpeg::handler::amf::is_available()
static std::map<h264::profile, std::string> h264_profiles{
{h264::profile::CONSTRAINED_BASELINE, "constrained_baseline"},
{h264::profile::MAIN, "main"},
{h264::profile::HIGH, "high"},
};
static std::map<h264::level, std::string> h264_levels{
{h264::level::L1_0, "1.0"}, {h264::level::L1_0b, "1.0b"}, {h264::level::L1_1, "1.1"}, {h264::level::L1_2, "1.2"}, {h264::level::L1_3, "1.3"}, {h264::level::L2_0, "2.0"}, {h264::level::L2_1, "2.1"}, {h264::level::L2_2, "2.2"}, {h264::level::L3_0, "3.0"}, {h264::level::L3_1, "3.1"}, {h264::level::L3_2, "3.2"}, {h264::level::L4_0, "4.0"}, {h264::level::L4_1, "4.1"}, {h264::level::L4_2, "4.2"}, {h264::level::L5_0, "5.0"}, {h264::level::L5_1, "5.1"}, {h264::level::L5_2, "5.2"}, {h264::level::L6_0, "6.0"}, {h264::level::L6_1, "6.1"}, {h264::level::L6_2, "6.2"},
};
static std::map<hevc::profile, std::string> hevc_profiles{
{hevc::profile::MAIN, "main"},
};
static std::map<hevc::tier, std::string> hevc_tiers{
{hevc::tier::MAIN, "main"},
{hevc::tier::HIGH, "high"},
};
static std::map<hevc::level, std::string> hevc_levels{
{hevc::level::L1_0, "1.0"}, {hevc::level::L2_0, "2.0"}, {hevc::level::L2_1, "2.1"}, {hevc::level::L3_0, "3.0"}, {hevc::level::L3_1, "3.1"}, {hevc::level::L4_0, "4.0"}, {hevc::level::L4_1, "4.1"}, {hevc::level::L5_0, "5.0"}, {hevc::level::L5_1, "5.1"}, {hevc::level::L5_2, "5.2"}, {hevc::level::L6_0, "6.0"}, {hevc::level::L6_1, "6.1"}, {hevc::level::L6_2, "6.2"},
};
bool streamfx::encoder::ffmpeg::amf::is_available()
{
#if defined(D_PLATFORM_WINDOWS)
#if defined(D_PLATFORM_64BIT)
@ -96,7 +135,7 @@ bool streamfx::encoder::ffmpeg::handler::amf::is_available()
#else
std::filesystem::path lib_name = std::filesystem::u8path("amfrt32.dll");
#endif
#elif defined(D_PLATFORM_LINUX)
#else
#if defined(D_PLATFORM_64BIT)
std::filesystem::path lib_name = std::filesystem::u8path("libamfrt64.so.1");
#else
@ -111,7 +150,7 @@ bool streamfx::encoder::ffmpeg::handler::amf::is_available()
}
}
void amf::get_defaults(obs_data_t* settings, const AVCodec* codec, AVCodecContext* context)
void amf::defaults(ffmpeg_factory* factory, obs_data_t* settings)
{
obs_data_set_default_int(settings, ST_KEY_PRESET, static_cast<int64_t>(amf::preset::BALANCED));
@ -173,7 +212,7 @@ static bool modified_ratecontrol(obs_properties_t* props, obs_property_t*, obs_d
return true;
}
void amf::get_properties_pre(obs_properties_t* props, const AVCodec* codec)
void amf::properties_before(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props)
{
{
auto p = obs_properties_add_text(props, "[[deprecated]]", D_TRANSLATE(ST_I18N_DEPRECATED), OBS_TEXT_INFO);
@ -187,8 +226,10 @@ void amf::get_properties_pre(obs_properties_t* props, const AVCodec* codec)
}
}
void amf::get_properties_post(obs_properties_t* props, const AVCodec* codec)
void amf::properties_after(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props)
{
auto codec = factory->get_avcodec();
{ // Rate Control
obs_properties_t* grp = obs_properties_create();
obs_properties_add_group(props, ST_I18N_RATECONTROL, D_TRANSLATE(ST_I18N_RATECONTROL), OBS_GROUP_NORMAL, grp);
@ -257,8 +298,15 @@ void amf::get_properties_post(obs_properties_t* props, const AVCodec* codec)
}
}
void amf::update(obs_data_t* settings, const AVCodec* codec, AVCodecContext* context)
void amf::properties_runtime(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props) {}
void amf::migrate(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, uint64_t version) {}
void amf::update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
auto codec = factory->get_avcodec();
auto context = instance->get_avcodeccontext();
// Always enable the loop filter.
context->flags |= AV_CODEC_FLAG_LOOP_FILTER;
@ -396,10 +444,15 @@ void amf::update(obs_data_t* settings, const AVCodec* codec, AVCodecContext* con
}
}
void amf::log_options(obs_data_t* settings, const AVCodec* codec, AVCodecContext* context)
void amf::override_update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings) {}
void amf::log(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
using namespace ::streamfx::ffmpeg;
auto codec = factory->get_avcodec();
auto context = instance->get_avcodeccontext();
DLOG_INFO("[%s] AMD AMF:", codec->name);
tools::print_av_option_string2(context, "usage", " Usage", [](int64_t v, std::string_view o) { return std::string(o); });
tools::print_av_option_string2(context, "quality", " Preset", [](int64_t v, std::string_view o) { return std::string(o); });
@ -438,8 +491,278 @@ void amf::log_options(obs_data_t* settings, const AVCodec* codec, AVCodecContext
tools::print_av_option_bool(context, "me_quarter_pel", " Quarter-Pel Motion Estimation");
}
void streamfx::encoder::ffmpeg::handler::amf::get_runtime_properties(obs_properties_t* props, const AVCodec* codec, AVCodecContext* context) {}
// H264 Handler
//--------------
void streamfx::encoder::ffmpeg::handler::amf::migrate(obs_data_t* settings, uint64_t version, const AVCodec* codec, AVCodecContext* context) {}
amf_h264::amf_h264() : handler("h264_amf") {}
void streamfx::encoder::ffmpeg::handler::amf::override_update(ffmpeg_instance* instance, obs_data_t* settings) {}
amf_h264::~amf_h264() {}
bool amf_h264::has_keyframes(ffmpeg_factory* instance)
{
return true;
}
bool amf_h264::is_hardware(ffmpeg_factory* instance)
{
return true;
}
bool amf_h264::has_threading(ffmpeg_factory* instance)
{
return false;
}
void streamfx::encoder::ffmpeg::amf_h264::adjust_info(ffmpeg_factory* factory, std::string& id, std::string& name, std::string& codec)
{
name = "AMD AMF H.264/AVC (via FFmpeg)";
if (!amf::is_available())
factory->get_info()->caps |= OBS_ENCODER_CAP_DEPRECATED;
factory->get_info()->caps |= OBS_ENCODER_CAP_DEPRECATED;
}
void amf_h264::defaults(ffmpeg_factory* factory, obs_data_t* settings)
{
amf::defaults(factory, settings);
obs_data_set_default_int(settings, ST_KEY_H264_PROFILE, static_cast<int64_t>(h264::profile::HIGH));
obs_data_set_default_int(settings, ST_KEY_H264_LEVEL, static_cast<int64_t>(h264::level::UNKNOWN));
}
void amf_h264::properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props)
{
if (!instance) {
this->get_encoder_properties(factory, instance, props);
} else {
this->get_runtime_properties(factory, instance, props);
}
}
void amf_h264::migrate(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, uint64_t version)
{
amf::migrate(factory, instance, settings, version);
}
void amf_h264::update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
auto codec = factory->get_avcodec();
auto context = instance->get_avcodeccontext();
amf::update(factory, instance, settings);
{
auto found = h264_profiles.find(static_cast<h264::profile>(obs_data_get_int(settings, ST_KEY_H264_PROFILE)));
if (found != h264_profiles.end()) {
av_opt_set(context->priv_data, "profile", found->second.c_str(), 0);
}
}
{
auto found = h264_levels.find(static_cast<h264::level>(obs_data_get_int(settings, ST_KEY_H264_LEVEL)));
if (found != h264_levels.end()) {
av_opt_set(context->priv_data, "level", found->second.c_str(), 0);
} else {
av_opt_set(context->priv_data, "level", "auto", 0);
}
}
}
void amf_h264::override_update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
amf::override_update(factory, instance, settings);
}
void amf_h264::log(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
auto codec = factory->get_avcodec();
auto context = instance->get_avcodeccontext();
amf::log(factory, instance, settings);
DLOG_INFO("[%s] H.264/AVC:", codec->name);
::streamfx::ffmpeg::tools::print_av_option_string2(context, context->priv_data, "profile", " Profile", [](int64_t v, std::string_view o) { return std::string(o); });
::streamfx::ffmpeg::tools::print_av_option_string2(context, context->priv_data, "level", " Level", [](int64_t v, std::string_view o) { return std::string(o); });
}
void amf_h264::get_encoder_properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props)
{
amf::properties_before(factory, instance, props);
{
obs_properties_t* grp = obs_properties_create();
obs_properties_add_group(props, S_CODEC_H264, D_TRANSLATE(S_CODEC_H264), OBS_GROUP_NORMAL, grp);
{
auto p = obs_properties_add_list(grp, ST_KEY_H264_PROFILE, D_TRANSLATE(S_CODEC_H264_PROFILE), OBS_COMBO_TYPE_LIST, OBS_COMBO_FORMAT_INT);
obs_property_list_add_int(p, D_TRANSLATE(S_STATE_DEFAULT), static_cast<int64_t>(h264::profile::UNKNOWN));
for (auto const kv : h264_profiles) {
std::string trans = std::string(S_CODEC_H264_PROFILE) + "." + kv.second;
obs_property_list_add_int(p, D_TRANSLATE(trans.c_str()), static_cast<int64_t>(kv.first));
}
}
{
auto p = obs_properties_add_list(grp, ST_KEY_H264_LEVEL, D_TRANSLATE(S_CODEC_H264_LEVEL), OBS_COMBO_TYPE_LIST, OBS_COMBO_FORMAT_INT);
obs_property_list_add_int(p, D_TRANSLATE(S_STATE_AUTOMATIC), static_cast<int64_t>(h264::level::UNKNOWN));
for (auto const kv : h264_levels) {
obs_property_list_add_int(p, kv.second.c_str(), static_cast<int64_t>(kv.first));
}
}
}
amf::properties_after(factory, instance, props);
}
void amf_h264::get_runtime_properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props)
{
amf::properties_runtime(factory, instance, props);
}
void amf_h264::override_colorformat(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, AVPixelFormat& target_format)
{
target_format = AV_PIX_FMT_NV12;
}
static auto inst_h264 = amf_h264();
// H265/HEVC Handler
//-------------------
amf_hevc::amf_hevc() : handler("hevc_amf") {}
amf_hevc::~amf_hevc(){};
bool amf_hevc::has_keyframes(ffmpeg_factory* instance)
{
return true;
}
bool amf_hevc::is_hardware(ffmpeg_factory* instance)
{
return true;
}
bool amf_hevc::has_threading(ffmpeg_factory* instance)
{
return false;
}
void streamfx::encoder::ffmpeg::amf_hevc::adjust_info(ffmpeg_factory* factory, std::string& id, std::string& name, std::string& codec)
{
name = "AMD AMF H.265/HEVC (via FFmpeg)";
if (!amf::is_available())
factory->get_info()->caps |= OBS_ENCODER_CAP_DEPRECATED;
factory->get_info()->caps |= OBS_ENCODER_CAP_DEPRECATED;
}
void amf_hevc::defaults(ffmpeg_factory* factory, obs_data_t* settings)
{
amf::defaults(factory, settings);
obs_data_set_default_int(settings, ST_KEY_HEVC_PROFILE, static_cast<int64_t>(hevc::profile::MAIN));
obs_data_set_default_int(settings, ST_KEY_HEVC_TIER, static_cast<int64_t>(hevc::tier::MAIN));
obs_data_set_default_int(settings, ST_KEY_HEVC_LEVEL, static_cast<int64_t>(hevc::level::UNKNOWN));
}
void amf_hevc::properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props)
{
if (!instance) {
this->get_encoder_properties(factory, instance, props);
} else {
this->get_runtime_properties(factory, instance, props);
}
}
void amf_hevc::migrate(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, uint64_t version)
{
amf::migrate(factory, instance, settings, version);
}
void amf_hevc::update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
auto codec = factory->get_avcodec();
auto context = instance->get_avcodeccontext();
amf::update(factory, instance, settings);
{ // HEVC Options
auto found = hevc_profiles.find(static_cast<hevc::profile>(obs_data_get_int(settings, ST_KEY_HEVC_PROFILE)));
if (found != hevc_profiles.end()) {
av_opt_set(context->priv_data, "profile", found->second.c_str(), 0);
}
}
{
auto found = hevc_tiers.find(static_cast<hevc::tier>(obs_data_get_int(settings, ST_KEY_HEVC_TIER)));
if (found != hevc_tiers.end()) {
av_opt_set(context->priv_data, "tier", found->second.c_str(), 0);
}
}
{
auto found = hevc_levels.find(static_cast<hevc::level>(obs_data_get_int(settings, ST_KEY_HEVC_LEVEL)));
if (found != hevc_levels.end()) {
av_opt_set(context->priv_data, "level", found->second.c_str(), 0);
} else {
av_opt_set(context->priv_data, "level", "auto", 0);
}
}
}
void amf_hevc::override_update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
amf::override_update(factory, instance, settings);
}
void amf_hevc::log(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
auto codec = factory->get_avcodec();
auto context = instance->get_avcodeccontext();
amf::log(factory, instance, settings);
DLOG_INFO("[%s] H.265/HEVC:", codec->name);
::streamfx::ffmpeg::tools::print_av_option_string2(context, "profile", " Profile", [](int64_t v, std::string_view o) { return std::string(o); });
::streamfx::ffmpeg::tools::print_av_option_string2(context, "level", " Level", [](int64_t v, std::string_view o) { return std::string(o); });
::streamfx::ffmpeg::tools::print_av_option_string2(context, "tier", " Tier", [](int64_t v, std::string_view o) { return std::string(o); });
}
void amf_hevc::get_encoder_properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props)
{
amf::properties_before(factory, instance, props);
{
obs_properties_t* grp = obs_properties_create();
obs_properties_add_group(props, S_CODEC_HEVC, D_TRANSLATE(S_CODEC_HEVC), OBS_GROUP_NORMAL, grp);
{
auto p = obs_properties_add_list(grp, ST_KEY_HEVC_PROFILE, D_TRANSLATE(S_CODEC_HEVC_PROFILE), OBS_COMBO_TYPE_LIST, OBS_COMBO_FORMAT_INT);
obs_property_list_add_int(p, D_TRANSLATE(S_STATE_DEFAULT), static_cast<int64_t>(hevc::profile::UNKNOWN));
for (auto const kv : hevc_profiles) {
std::string trans = std::string(S_CODEC_HEVC_PROFILE) + "." + kv.second;
obs_property_list_add_int(p, D_TRANSLATE(trans.c_str()), static_cast<int64_t>(kv.first));
}
}
{
auto p = obs_properties_add_list(grp, ST_KEY_HEVC_TIER, D_TRANSLATE(S_CODEC_HEVC_TIER), OBS_COMBO_TYPE_LIST, OBS_COMBO_FORMAT_INT);
obs_property_list_add_int(p, D_TRANSLATE(S_STATE_DEFAULT), static_cast<int64_t>(hevc::tier::UNKNOWN));
for (auto const kv : hevc_tiers) {
std::string trans = std::string(S_CODEC_HEVC_TIER) + "." + kv.second;
obs_property_list_add_int(p, D_TRANSLATE(trans.c_str()), static_cast<int64_t>(kv.first));
}
}
{
auto p = obs_properties_add_list(grp, ST_KEY_HEVC_LEVEL, D_TRANSLATE(S_CODEC_HEVC_LEVEL), OBS_COMBO_TYPE_LIST, OBS_COMBO_FORMAT_INT);
obs_property_list_add_int(p, D_TRANSLATE(S_STATE_AUTOMATIC), static_cast<int64_t>(hevc::level::UNKNOWN));
for (auto const kv : hevc_levels) {
obs_property_list_add_int(p, kv.second.c_str(), static_cast<int64_t>(kv.first));
}
}
}
amf::properties_after(factory, instance, props);
}
void amf_hevc::get_runtime_properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props)
{
amf::properties_runtime(factory, instance, props);
}
static auto inst_hevc = amf_hevc();

View File

@ -0,0 +1,164 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
// Copyright (C) 2020-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// THIS FEATURE IS DEPRECATED. SUBMITTED PATCHES WILL BE REJECTED.
#pragma once
#include "encoders/encoder-ffmpeg.hpp"
#include "encoders/ffmpeg/handler.hpp"
#include "warning-disable.hpp"
#include <cinttypes>
extern "C" {
#include <libavcodec/avcodec.h>
}
#include "warning-enable.hpp"
/* Parameters by their codec specific name.
* '#' denotes a parameter specified via the context itself.
H.264 H.265 Options Done?
usage usage transcoding --
preset preset speed,balanced,quality Defines
profile profile <different> Defines
level level <different> Defines
tier main,high
rc rc cqp,cbr,vbr_peak,vbr_latency Defines
preanalysis preanalysis false,true Defines
vbaq vbaq false,true Defines
enforce_hrd enforce_hrd false,true Defines
filler_data filler_data false,true --
frame_skipping skip_frame false,true Defines
qp_i qp_i range(-1 - 51) Defines
qp_p qp_p range(-1 - 51) Defines
qp_b range(-1 - 51) Defines
#max_b_frames Defines
bf_delta_qp range(-10 - 10) --
bf_ref false,true Defines
bf_ref_delta_qp range(-10 - 10) --
me_half_pel me_half_pel false,true --
me_quarter_pel me_quarter_pel false,true --
aud aud false,true Defines
max_au_size max_au_size range(0 - Inf) --
#refs range(0 - 16?) Defines
#color_range AVCOL_RANGE_JPEG FFmpeg
#bit_rate Defines
#rc_max_rate Defines
#rc_buffer_size Defines
#rc_initial_buffer_occupancy --
#flags AV_CODEC_FLAG_LOOP_FILTER --
#gop_size FFmpeg
*/
// AMF H.264
// intra_refresh_mb: 0 - Inf
// header_spacing: -1 - 1000
// coder: auto, cavlc, cabac
// qmin, qmax (HEVC uses its own settings)
// AMF H.265
// header_insertion_mode: none, gop, idr
// gops_per_idr: 0 - Inf
// min_qp_i: -1 - 51
// max_qp_i: -1 - 51
// min_qp_p: -1 - 51
// max_qp_p: -1 - 51
namespace streamfx::encoder::ffmpeg {
namespace amf {
enum class preset : int32_t {
SPEED,
BALANCED,
QUALITY,
INVALID = -1,
};
enum class ratecontrolmode : int64_t {
CQP,
CBR,
VBR_PEAK,
VBR_LATENCY,
INVALID = -1,
};
extern std::map<preset, std::string> presets;
extern std::map<preset, std::string> preset_to_opt;
extern std::map<ratecontrolmode, std::string> ratecontrolmodes;
extern std::map<ratecontrolmode, std::string> ratecontrolmode_to_opt;
bool is_available();
void defaults(ffmpeg_factory* factory, obs_data_t* settings);
void properties_before(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props);
void properties_after(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props);
void properties_runtime(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props);
void migrate(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, uint64_t version);
void update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings);
void override_update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings);
void log(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings);
} // namespace amf
class amf_h264 : public handler {
public:
amf_h264();
virtual ~amf_h264();
bool has_keyframes(ffmpeg_factory* instance) override;
bool is_hardware(ffmpeg_factory* instance) override;
bool has_threading(ffmpeg_factory* instance) override;
void adjust_info(ffmpeg_factory* factory, std::string& id, std::string& name, std::string& codec) override;
virtual std::string help(ffmpeg_factory* factory) override
{
return "https://github.com/Xaymar/obs-StreamFX/wiki/Encoder-FFmpeg-AMF";
};
void defaults(ffmpeg_factory* factory, obs_data_t* settings) override;
void properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props) override;
void migrate(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, uint64_t version) override;
void update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings) override;
void override_update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings) override;
void log(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings) override;
void override_colorformat(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, AVPixelFormat& target_format) override;
private:
void get_encoder_properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props);
void get_runtime_properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props);
};
class amf_hevc : public handler {
public:
amf_hevc();
virtual ~amf_hevc();
bool has_keyframes(ffmpeg_factory* instance) override;
bool is_hardware(ffmpeg_factory* instance) override;
bool has_threading(ffmpeg_factory* instance) override;
void adjust_info(ffmpeg_factory* factory, std::string& id, std::string& name, std::string& codec) override;
std::string help(ffmpeg_factory* factory) override
{
return "https://github.com/Xaymar/obs-StreamFX/wiki/Encoder-FFmpeg-AMF";
};
void defaults(ffmpeg_factory* factory, obs_data_t* settings) override;
void properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props) override;
void migrate(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, uint64_t version) override;
void update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings) override;
void override_update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings) override;
void log(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings) override;
private:
void get_encoder_properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props);
void get_runtime_properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props);
};
} // namespace streamfx::encoder::ffmpeg

View File

@ -0,0 +1,89 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
// Copyright (C) 2020-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// Copyright (C) 2020 Daniel Molkentin <daniel@molkentin.de>
#include "cfhd.hpp"
#include "common.hpp"
#include "encoders/encoder-ffmpeg.hpp"
#include "ffmpeg/tools.hpp"
#include "handler.hpp"
#include "plugin.hpp"
#include "warning-disable.hpp"
#include <map>
#include <string>
#include <utility>
#include <vector>
extern "C" {
#include <libavutil/opt.h>
}
#include "warning-enable.hpp"
using namespace streamfx::encoder::ffmpeg;
struct strings {
struct quality {
static constexpr const char* ffmpeg = "quality";
static constexpr const char* obs = "Quality";
static constexpr const char* i18n = "Encoder.FFmpeg.CineForm.Quality";
};
};
cfhd::cfhd() : handler("cfhd") {}
bool cfhd::has_keyframes(ffmpeg_factory* factory)
{
return false;
}
void cfhd::properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props)
{
// Try and acquire a valid context.
std::shared_ptr<AVCodecContext> ctx;
if (instance) {
ctx = std::shared_ptr<AVCodecContext>(instance->get_avcodeccontext(), [](AVCodecContext*) {});
} else { // If we don't have a context, create a temporary one that is automatically freed.
ctx = std::shared_ptr<AVCodecContext>(avcodec_alloc_context3(factory->get_avcodec()), [](AVCodecContext* v) { avcodec_free_context(&v); });
if (!ctx->priv_data) {
return;
}
}
{ // Quality parameter
auto to_string = [](const char* v) {
char buffer[1024];
snprintf(buffer, sizeof(buffer), "%s.%s", strings::quality::i18n, v);
return D_TRANSLATE(buffer);
};
auto p = obs_properties_add_list(props, strings::quality::obs, D_TRANSLATE(strings::quality::i18n), OBS_COMBO_TYPE_LIST, OBS_COMBO_FORMAT_STRING);
streamfx::ffmpeg::tools::avoption_list_add_entries(ctx->priv_data, strings::quality::ffmpeg, [&p, &to_string](const AVOption* opt) {
// FFmpeg returns this list in the wrong order. We want to start at the lowest, and go to the highest.
// So simply always insert at the top, and this will reverse the list.
obs_property_list_insert_string(p, 0, to_string(opt->name), opt->name);
});
}
}
std::string cfhd::help(ffmpeg_factory* factory)
{
return "https://github.com/Xaymar/obs-StreamFX/wiki/Encoder-FFmpeg-GoPro-CineForm";
}
void cfhd::defaults(ffmpeg_factory* factory, obs_data_t* settings)
{
obs_data_set_string(settings, strings::quality::obs, "film3+");
}
void cfhd::migrate(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, uint64_t version) {}
void cfhd::update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
if (const char* v = obs_data_get_string(settings, strings::quality::obs); v && (v[0] != '\0')) {
av_opt_set(instance->get_avcodeccontext()->priv_data, strings::quality::ffmpeg, v, AV_OPT_SEARCH_CHILDREN);
}
}
static cfhd handler = cfhd();
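avoption_list_add_entries is a StreamFX helper whose exact signature is not part of this diff; as a rough sketch of what such an enumeration looks like against FFmpeg's public AVOption API (the name list_option_constants and the callback shape are illustrative assumptions):

extern "C" {
#include <libavutil/opt.h>
}
#include <cstring>
#include <functional>

// Enumerate the named constants ("film3+", "film2", ...) that belong to the
// same option unit as a given private option, e.g. CineForm's "quality".
static void list_option_constants(void* priv_data, const char* option, const std::function<void(const AVOption*)>& callback)
{
	const AVOption* parent = av_opt_find(priv_data, option, nullptr, 0, 0);
	if (!parent || !parent->unit)
		return;
	for (const AVOption* opt = nullptr; (opt = av_opt_next(priv_data, opt)) != nullptr;) {
		if ((opt->type == AV_OPT_TYPE_CONST) && opt->unit && (std::strcmp(opt->unit, parent->unit) == 0))
			callback(opt);
	}
}

Handing the raw AVOption to the callback is what lets the caller control ordering, which is why the property code above inserts each entry at index 0 to reverse FFmpeg's highest-to-lowest listing.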

View File

@ -0,0 +1,26 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
#pragma once
#include "handler.hpp"
namespace streamfx::encoder::ffmpeg {
class cfhd : public handler {
public:
cfhd();
virtual ~cfhd(){};
bool has_keyframes(ffmpeg_factory* factory) override;
std::string help(ffmpeg_factory* factory) override;
void defaults(ffmpeg_factory* factory, obs_data_t* settings) override;
void properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props) override;
void migrate(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, uint64_t version) override;
void update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings) override;
};
} // namespace streamfx::encoder::ffmpeg

View File

@ -3,8 +3,9 @@
// Copyright (C) 2020 Daniel Molkentin <daniel@molkentin.de>
// AUTOGENERATED COPYRIGHT HEADER END
#include "debug_handler.hpp"
#include "debug.hpp"
#include "common.hpp"
#include "../encoder-ffmpeg.hpp"
#include "handler.hpp"
#include "plugin.hpp"
@ -13,17 +14,10 @@
#include <string>
#include <utility>
#include <vector>
#include "warning-enable.hpp"
extern "C" {
#include "warning-disable.hpp"
#include <libavutil/opt.h>
#include "warning-enable.hpp"
}
using namespace streamfx::encoder::ffmpeg::handler;
void debug_handler::get_defaults(obs_data_t*, const AVCodec*, AVCodecContext*, bool) {}
#include "warning-enable.hpp"
template<typename T>
std::string to_string(T value)
@ -55,9 +49,15 @@ std::string to_string(double_t value)
return std::string(buf.data(), buf.data() + buf.size());
}
void debug_handler::get_properties(obs_properties_t*, const AVCodec* codec, AVCodecContext* context, bool)
using namespace streamfx::encoder::ffmpeg;
debug::debug() : handler("") {}
void debug::properties(ffmpeg_instance* instance, obs_properties_t* props)
{
if (context)
const AVCodec* codec = instance->get_avcodec();
if (instance->get_avcodeccontext())
return;
AVCodecContext* ctx = avcodec_alloc_context3(codec);
@ -161,4 +161,4 @@ void debug_handler::get_properties(obs_properties_t*, const AVCodec* codec, AVCo
}
}
void debug_handler::update(obs_data_t*, const AVCodec*, AVCodecContext*) {}
static auto handler = debug();

View File

@ -0,0 +1,16 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
#pragma once
#include "handler.hpp"
namespace streamfx::encoder::ffmpeg {
class debug : public handler {
public:
debug();
virtual ~debug(){};
virtual void properties(ffmpeg_instance* instance, obs_properties_t* props);
};
} // namespace streamfx::encoder::ffmpeg

View File

@ -4,7 +4,7 @@
// Copyright (C) 2022-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
#include "dnxhd_handler.hpp"
#include "dnxhd.hpp"
#include "common.hpp"
#include "../codecs/dnxhr.hpp"
#include "ffmpeg/tools.hpp"
@ -14,17 +14,79 @@
#include <array>
#include "warning-enable.hpp"
using namespace streamfx::encoder::ffmpeg::handler;
using namespace streamfx::encoder::ffmpeg;
using namespace streamfx::encoder::codec::dnxhr;
void dnxhd_handler::adjust_info(ffmpeg_factory* fac, const AVCodec*, std::string&, std::string& name, std::string&)
inline const char* dnx_profile_to_display_name(const char* profile)
{
char buffer[1024];
snprintf(buffer, sizeof(buffer), "%s.%s", S_CODEC_DNXHR_PROFILE, profile);
return D_TRANSLATE(buffer);
}
dnxhd::dnxhd() : handler("dnxhd") {}
dnxhd::~dnxhd() {}
void dnxhd::adjust_info(ffmpeg_factory* factory, std::string& id, std::string& name, std::string& codec)
{
//Most people don't know what VC3 is and only know it as DNx.
//Change name to make it easier to find.
name = "Avid DNxHR (via FFmpeg)";
}
void dnxhd_handler::override_colorformat(AVPixelFormat& target_format, obs_data_t* settings, const AVCodec* codec, AVCodecContext*)
bool dnxhd::has_keyframes(ffmpeg_factory* instance)
{
return false;
}
void dnxhd::defaults(ffmpeg_factory* factory, obs_data_t* settings)
{
obs_data_set_default_string(settings, S_CODEC_DNXHR_PROFILE, "dnxhr_sq");
}
void dnxhd::properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props)
{
// Try and acquire a valid context.
std::shared_ptr<AVCodecContext> ctx;
if (instance) {
ctx = std::shared_ptr<AVCodecContext>(instance->get_avcodeccontext(), [](AVCodecContext*) {});
} else { // If we don't have a context, create a temporary one that is automatically freed.
ctx = std::shared_ptr<AVCodecContext>(avcodec_alloc_context3(factory->get_avcodec()), [](AVCodecContext* v) { avcodec_free_context(&v); });
if (!ctx->priv_data) {
return;
}
}
auto p = obs_properties_add_list(props, S_CODEC_DNXHR_PROFILE, D_TRANSLATE(S_CODEC_DNXHR_PROFILE), OBS_COMBO_TYPE_LIST, OBS_COMBO_FORMAT_STRING);
streamfx::ffmpeg::tools::avoption_list_add_entries(ctx->priv_data, "profile", [&p](const AVOption* opt) {
if (strcmp(opt->name, "dnxhd") == 0) {
//Do not show DNxHD profile as it is outdated and should not be used.
//It's also very picky about framerate and framesize combos, which makes it even less useful
return;
}
//ffmpeg returns the profiles for DNxHR from highest to lowest.
//Lowest to highest is what people usually expect.
//Therefore, new entries will always be inserted at the top, effectively reversing the list
obs_property_list_insert_string(p, 0, dnx_profile_to_display_name(opt->name), opt->name);
});
}
void dnxhd::update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
const char* profile = obs_data_get_string(settings, S_CODEC_DNXHR_PROFILE);
av_opt_set(instance->get_avcodeccontext(), "profile", profile, AV_OPT_SEARCH_CHILDREN);
}
void dnxhd::log(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
DLOG_INFO("[%s] Avid DNxHR:", factory->get_avcodec()->name);
streamfx::ffmpeg::tools::print_av_option_string2(instance->get_avcodeccontext(), "profile", " Profile", [](int64_t v, std::string_view o) { return std::string(o); });
}
void dnxhd::override_colorformat(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, AVPixelFormat& target_format)
{
static const std::array<std::pair<const char*, AVPixelFormat>, static_cast<size_t>(5)> profile_to_format_map{std::pair{"dnxhr_lb", AV_PIX_FMT_YUV422P}, std::pair{"dnxhr_sq", AV_PIX_FMT_YUV422P}, std::pair{"dnxhr_hq", AV_PIX_FMT_YUV422P}, std::pair{"dnxhr_hqx", AV_PIX_FMT_YUV422P10}, std::pair{"dnxhr_444", AV_PIX_FMT_YUV444P10}};
@ -40,69 +102,4 @@ void dnxhd_handler::override_colorformat(AVPixelFormat& target_format, obs_data_
target_format = AV_PIX_FMT_YUV422P;
}
void dnxhd_handler::get_defaults(obs_data_t* settings, const AVCodec*, AVCodecContext*, bool)
{
obs_data_set_default_string(settings, S_CODEC_DNXHR_PROFILE, "dnxhr_sq");
}
bool dnxhd_handler::has_keyframe_support(ffmpeg_factory* instance)
{
return false;
}
bool dnxhd_handler::has_pixel_format_support(ffmpeg_factory* instance)
{
return false;
}
inline const char* dnx_profile_to_display_name(const char* profile)
{
char buffer[1024];
snprintf(buffer, sizeof(buffer), "%s.%s", S_CODEC_DNXHR_PROFILE, profile);
return D_TRANSLATE(buffer);
}
void dnxhd_handler::get_properties(obs_properties_t* props, const AVCodec* codec, AVCodecContext* context, bool)
{
AVCodecContext* ctx = context;
//Create dummy context if null was passed to the function
if (!ctx) {
ctx = avcodec_alloc_context3(codec);
if (!ctx->priv_data) {
avcodec_free_context(&ctx);
return;
}
}
auto p = obs_properties_add_list(props, S_CODEC_DNXHR_PROFILE, D_TRANSLATE(S_CODEC_DNXHR_PROFILE), OBS_COMBO_TYPE_LIST, OBS_COMBO_FORMAT_STRING);
streamfx::ffmpeg::tools::avoption_list_add_entries(ctx->priv_data, "profile", [&p](const AVOption* opt) {
if (strcmp(opt->name, "dnxhd") == 0) {
//Do not show DNxHD profile as it is outdated and should not be used.
//It's also very picky about framerate and framesize combos, which makes it even less useful
return;
}
//ffmpeg returns the profiles for DNxHR from highest to lowest.
//Lowest to highest is what people usually expect.
//Therefore, new entries will always be inserted at the top, effectively reversing the list
obs_property_list_insert_string(p, 0, dnx_profile_to_display_name(opt->name), opt->name);
});
//Free context if we created it here
if (ctx && ctx != context) {
avcodec_free_context(&ctx);
}
}
void dnxhd_handler::update(obs_data_t* settings, const AVCodec* codec, AVCodecContext* context)
{
const char* profile = obs_data_get_string(settings, S_CODEC_DNXHR_PROFILE);
av_opt_set(context, "profile", profile, AV_OPT_SEARCH_CHILDREN);
}
void dnxhd_handler::log_options(obs_data_t* settings, const AVCodec* codec, AVCodecContext* context)
{
DLOG_INFO("[%s] Avid DNxHR:", codec->name);
streamfx::ffmpeg::tools::print_av_option_string2(context, "profile", " Profile", [](int64_t v, std::string_view o) { return std::string(o); });
}
static auto inst = dnxhd();

View File

@ -0,0 +1,37 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2020-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
// Copyright (C) 2022 Carsten Braun <info@braun-cloud.de>
#pragma once
#include "encoders/encoder-ffmpeg.hpp"
#include "encoders/ffmpeg/handler.hpp"
#include "warning-disable.hpp"
extern "C" {
#include <libavcodec/avcodec.h>
}
#include "warning-enable.hpp"
namespace streamfx::encoder::ffmpeg {
class dnxhd : public handler {
public:
dnxhd();
virtual ~dnxhd();
virtual bool has_keyframes(ffmpeg_factory* factory);
virtual void adjust_info(ffmpeg_factory* factory, std::string& id, std::string& name, std::string& codec);
virtual std::string help(ffmpeg_factory* factory) {
return "https://github.com/Xaymar/obs-StreamFX/wiki/Encoder-FFmpeg-Avid-DNxHR";
}
virtual void defaults(ffmpeg_factory* factory, obs_data_t* settings);
virtual void properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props);
virtual void update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings);
virtual void log(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings);
virtual void override_colorformat(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, AVPixelFormat& target_format);
};
} // namespace streamfx::encoder::ffmpeg

View File

@ -0,0 +1,81 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2020-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
#include "handler.hpp"
#include "../encoder-ffmpeg.hpp"
streamfx::encoder::ffmpeg::handler::handler_map_t& streamfx::encoder::ffmpeg::handler::handlers()
{
static handler_map_t handlers;
return handlers;
}
streamfx::encoder::ffmpeg::handler::handler(std::string codec)
{
handlers().emplace(codec, this);
}
bool streamfx::encoder::ffmpeg::handler::has_keyframes(ffmpeg_factory* factory)
{
#if defined(AV_CODEC_PROP_INTRA_ONLY) // TODO: Determine if we need to check for an exact version.
if (auto* desc = avcodec_descriptor_get(factory->get_avcodec()->id); desc) {
return (desc->props & AV_CODEC_PROP_INTRA_ONLY) == 0;
}
#endif
#ifdef AV_CODEC_CAP_INTRA_ONLY
return (factory->get_avcodec()->capabilities & AV_CODEC_CAP_INTRA_ONLY) == 0;
#else
return false;
#endif
}
bool streamfx::encoder::ffmpeg::handler::has_threading(ffmpeg_factory* factory)
{
return (factory->get_avcodec()->capabilities
& (AV_CODEC_CAP_FRAME_THREADS | AV_CODEC_CAP_SLICE_THREADS
#if defined(AV_CODEC_CAP_OTHER_THREADS) // TODO: Determine if we need to check for an exact version.
| AV_CODEC_CAP_OTHER_THREADS
#else
| AV_CODEC_CAP_AUTO_THREADS
#endif
));
}
bool streamfx::encoder::ffmpeg::handler::is_hardware(ffmpeg_factory* factory)
{
if (factory->get_avcodec()->capabilities & AV_CODEC_CAP_HARDWARE) {
return true;
}
return false;
}
bool streamfx::encoder::ffmpeg::handler::is_reconfigurable(ffmpeg_factory* factory, bool& threads, bool& gpu, bool& keyframes)
{
if (factory->get_avcodec()->capabilities & AV_CODEC_CAP_PARAM_CHANGE) {
return true;
}
return false;
}
void streamfx::encoder::ffmpeg::handler::adjust_info(ffmpeg_factory* factory, std::string& id, std::string& name, std::string& codec) {}
std::string streamfx::encoder::ffmpeg::handler::help(ffmpeg_factory* factory)
{
return "about:blank";
}
void streamfx::encoder::ffmpeg::handler::defaults(ffmpeg_factory* factory, obs_data_t* settings) {}
void streamfx::encoder::ffmpeg::handler::properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props) {}
void streamfx::encoder::ffmpeg::handler::migrate(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, uint64_t version) {}
void streamfx::encoder::ffmpeg::handler::update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings) {}
void streamfx::encoder::ffmpeg::handler::override_update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings) {}
void streamfx::encoder::ffmpeg::handler::log(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings) {}
void streamfx::encoder::ffmpeg::handler::override_colorformat(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, AVPixelFormat& target_format) {}

View File

@ -0,0 +1,47 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2020-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
#pragma once
#include "warning-disable.hpp"
#include <cstdint>
#include <map>
#include <string>
extern "C" {
#include <obs.h>
#include <libavcodec/avcodec.h>
}
#include "warning-enable.hpp"
namespace streamfx::encoder::ffmpeg {
class ffmpeg_factory;
class ffmpeg_instance;
struct handler {
handler(std::string codec);
virtual ~handler(){};
virtual bool has_keyframes(ffmpeg_factory* factory);
virtual bool has_threading(ffmpeg_factory* factory);
virtual bool is_hardware(ffmpeg_factory* factory);
virtual bool is_reconfigurable(ffmpeg_factory* factory, bool& threads, bool& gpu, bool& keyframes);
virtual void adjust_info(ffmpeg_factory* factory, std::string& id, std::string& name, std::string& codec);
virtual std::string help(ffmpeg_factory* factory);
virtual void defaults(ffmpeg_factory* factory, obs_data_t* settings);
virtual void properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props);
virtual void migrate(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, uint64_t version);
virtual void update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings);
virtual void override_update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings);
virtual void log(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings);
virtual void override_colorformat(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, AVPixelFormat& target_format);
public:
typedef std::map<std::string, handler*> handler_map_t;
static handler_map_t& handlers();
};
} // namespace streamfx::encoder::ffmpeg

View File

@ -3,15 +3,20 @@
// Copyright (C) 2022 lainon <GermanAizek@yandex.ru>
// AUTOGENERATED COPYRIGHT HEADER END
#include "nvenc_shared.hpp"
#include "nvenc.hpp"
#include "common.hpp"
#include "strings.hpp"
#include "encoders/codecs/h264.hpp"
#include "encoders/codecs/hevc.hpp"
#include "encoders/encoder-ffmpeg.hpp"
#include "ffmpeg/tools.hpp"
#include "plugin.hpp"
extern "C" {
#include "warning-disable.hpp"
extern "C" {
#include <libavutil/opt.h>
#include "warning-enable.hpp"
}
#include "warning-enable.hpp"
#define ST_I18N_PRESET "Encoder.FFmpeg.NVENC.Preset"
#define ST_I18N_PRESET_(x) ST_I18N_PRESET "." D_VSTR(x)
@ -77,7 +82,15 @@ extern "C" {
#define ST_I18N_OTHER_LOWDELAYKEYFRAMESCALE ST_I18N_OTHER ".LowDelayKeyFrameScale"
#define ST_KEY_OTHER_LOWDELAYKEYFRAMESCALE "Other.LowDelayKeyFrameScale"
using namespace streamfx::encoder::ffmpeg::handler;
#define ST_KEY_H264_PROFILE "H264.Profile"
#define ST_KEY_H264_LEVEL "H264.Level"
#define ST_KEY_H265_PROFILE "H265.Profile"
#define ST_KEY_H265_TIER "H265.Tier"
#define ST_KEY_H265_LEVEL "H265.Level"
using namespace streamfx::encoder::ffmpeg;
using namespace streamfx::encoder::codec;
inline bool is_cqp(std::string_view rc)
{
@ -94,7 +107,7 @@ inline bool is_vbr(std::string_view rc)
return std::string_view("vbr") == rc;
}
bool streamfx::encoder::ffmpeg::handler::nvenc::is_available()
bool nvenc::is_available()
{
#if defined(D_PLATFORM_WINDOWS)
#if defined(D_PLATFORM_64BIT)
@ -113,37 +126,7 @@ bool streamfx::encoder::ffmpeg::handler::nvenc::is_available()
}
}
void nvenc::override_update(ffmpeg_instance* instance, obs_data_t*)
{
AVCodecContext* context = const_cast<AVCodecContext*>(instance->get_avcodeccontext());
int64_t rclookahead = 0;
int64_t surfaces = 0;
int64_t async_depth = 0;
av_opt_get_int(context, "rc-lookahead", AV_OPT_SEARCH_CHILDREN, &rclookahead);
av_opt_get_int(context, "surfaces", AV_OPT_SEARCH_CHILDREN, &surfaces);
av_opt_get_int(context, "async_depth", AV_OPT_SEARCH_CHILDREN, &async_depth);
// Calculate and set the number of surfaces to allocate (if not user overridden).
if (surfaces == 0) {
surfaces = std::max<int64_t>(4ll, (context->max_b_frames + 1ll) * 4ll);
if (rclookahead > 0) {
surfaces = std::max<int64_t>(1ll, std::max<int64_t>(surfaces, rclookahead + (context->max_b_frames + 5ll)));
} else if (context->max_b_frames > 0) {
surfaces = std::max<int64_t>(4ll, (context->max_b_frames + 1ll) * 4ll);
} else {
surfaces = 4;
}
av_opt_set_int(context, "surfaces", surfaces, AV_OPT_SEARCH_CHILDREN);
}
// Set delay
context->delay = std::min<int>(std::max<int>(static_cast<int>(async_depth), 3), static_cast<int>(surfaces - 1));
}
void nvenc::get_defaults(obs_data_t* settings, const AVCodec*, AVCodecContext*)
void nvenc::defaults(ffmpeg_factory* factory, obs_data_t* settings)
{
obs_data_set_default_string(settings, ST_KEY_PRESET, "default");
obs_data_set_default_string(settings, ST_I18N_TUNE, "hq");
@ -232,8 +215,10 @@ static bool modified_aq(obs_properties_t* props, obs_property_t*, obs_data_t* se
return true;
}
void nvenc::get_properties_pre(obs_properties_t* props, const AVCodec*, const AVCodecContext* context)
void nvenc::properties_before(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props, AVCodecContext* context)
{
auto codec = factory->get_avcodec();
{
auto p = obs_properties_add_list(props, ST_KEY_PRESET, D_TRANSLATE(ST_I18N_PRESET), OBS_COMBO_TYPE_LIST, OBS_COMBO_FORMAT_STRING);
streamfx::ffmpeg::tools::avoption_list_add_entries(context->priv_data, "preset", [&p](const AVOption* opt) {
@ -253,8 +238,10 @@ void nvenc::get_properties_pre(obs_properties_t* props, const AVCodec*, const AV
}
}
void nvenc::get_properties_post(obs_properties_t* props, const AVCodec* codec, const AVCodecContext* context)
void nvenc::properties_after(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props, AVCodecContext* context)
{
auto codec = factory->get_avcodec();
{ // Rate Control
obs_properties_t* grp = props;
if (!streamfx::util::are_property_groups_broken()) {
@ -420,7 +407,7 @@ void nvenc::get_properties_post(obs_properties_t* props, const AVCodec* codec, c
}
}
void nvenc::get_runtime_properties(obs_properties_t* props, const AVCodec*, AVCodecContext*)
void nvenc::properties_runtime(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props)
{
obs_property_set_enabled(obs_properties_get(props, ST_KEY_PRESET), false);
obs_property_set_enabled(obs_properties_get(props, ST_KEY_TUNE), false);
@ -456,8 +443,94 @@ void nvenc::get_runtime_properties(obs_properties_t* props, const AVCodec*, AVCo
obs_property_set_enabled(obs_properties_get(props, ST_KEY_OTHER_LOWDELAYKEYFRAMESCALE), false);
}
void nvenc::update(obs_data_t* settings, const AVCodec* codec, AVCodecContext* context)
void nvenc::migrate(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, uint64_t version)
{
// Only test for A.B.C in A.B.C.D
version = version & STREAMFX_MASK_UPDATE;
#define COPY_UNSET(TYPE, FROM, TO) \
if (obs_data_has_user_value(settings, FROM)) { \
obs_data_set_##TYPE(settings, TO, obs_data_get_##TYPE(settings, FROM)); \
obs_data_unset_user_value(settings, FROM); \
}
if (version <= STREAMFX_MAKE_VERSION(0, 8, 0, 0)) {
COPY_UNSET(int, "RateControl.Bitrate.Target", ST_KEY_RATECONTROL_LIMITS_BITRATE_TARGET);
COPY_UNSET(int, "RateControl.Bitrate.Maximum", ST_KEY_RATECONTROL_LIMITS_BITRATE_TARGET);
COPY_UNSET(int, "RateControl.BufferSize", ST_KEY_RATECONTROL_LIMITS_BUFFERSIZE);
COPY_UNSET(int, "RateControl.Quality.Minimum", ST_KEY_RATECONTROL_QP_MINIMUM);
COPY_UNSET(int, "RateControl.Quality.Maximum", ST_KEY_RATECONTROL_QP_MAXIMUM);
COPY_UNSET(double, "RateControl.Quality.Target", ST_KEY_RATECONTROL_LIMITS_QUALITY);
}
if (version < STREAMFX_MAKE_VERSION(0, 11, 0, 0)) {
obs_data_unset_user_value(settings, "Other.AccessUnitDelimiter");
obs_data_unset_user_value(settings, "Other.DecodedPictureBufferSize");
}
if (version < STREAMFX_MAKE_VERSION(0, 11, 1, 0)) {
// Preset
if (auto v = obs_data_get_int(settings, ST_KEY_PRESET); v != -1) {
std::map<int64_t, std::string> preset{
{0, "default"}, {1, "slow"}, {2, "medium"}, {3, "fast"}, {4, "hp"}, {5, "hq"}, {6, "bd"}, {7, "ll"}, {8, "llhq"}, {9, "llhp"}, {10, "lossless"}, {11, "losslesshp"},
};
if (auto k = preset.find(v); k != preset.end()) {
obs_data_set_string(settings, ST_KEY_PRESET, k->second.data());
}
}
// Rate Control Mode
if (auto v = obs_data_get_int(settings, ST_KEY_RATECONTROL_MODE); v != -1) {
if (!obs_data_has_user_value(settings, ST_KEY_RATECONTROL_MODE))
v = 4;
switch (v) {
case 0: // CQP
obs_data_set_string(settings, ST_KEY_RATECONTROL_MODE, "constqp");
break;
case 2: // VBR_HQ
obs_data_set_int(settings, ST_KEY_RATECONTROL_TWOPASS, 1);
obs_data_set_string(settings, ST_KEY_RATECONTROL_MULTIPASS, "qres");
case 1: // VBR
obs_data_set_string(settings, ST_KEY_RATECONTROL_MODE, "vbr");
break;
case 5: // CBR_LD_HQ
obs_data_set_int(settings, ST_KEY_OTHER_LOWDELAYKEYFRAMESCALE, 1);
case 4: // CBR_HQ
obs_data_set_int(settings, ST_KEY_RATECONTROL_TWOPASS, 1);
obs_data_set_string(settings, ST_KEY_RATECONTROL_MULTIPASS, "qres");
case 3: // CBR
obs_data_set_string(settings, ST_KEY_RATECONTROL_MODE, "cbr");
break;
}
}
// Target Quality
if (auto v = obs_data_get_double(settings, ST_KEY_RATECONTROL_LIMITS_QUALITY); v > 0) {
obs_data_set_double(settings, ST_KEY_RATECONTROL_LIMITS_QUALITY, (v / 100.) * 51.);
}
// B-Frame Reference Modes
if (auto v = obs_data_get_int(settings, ST_KEY_OTHER_BFRAMEREFERENCEMODE); v != -1) {
std::map<int64_t, std::string> preset{
{0, "default"},
{1, "each"},
{2, "middle"},
};
if (auto k = preset.find(v); k != preset.end()) {
obs_data_set_string(settings, ST_KEY_OTHER_BFRAMEREFERENCEMODE, k->second.data());
}
}
}
#undef COPY_UNSET
}
void nvenc::update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
auto codec = factory->get_avcodec();
auto context = instance->get_avcodeccontext();
if (const char* v = obs_data_get_string(settings, ST_KEY_PRESET); !context->internal && (v != nullptr) && (v[0] != '\0')) {
av_opt_set(context->priv_data, "preset", v, AV_OPT_SEARCH_CHILDREN);
}
@ -516,8 +589,8 @@ void nvenc::update(obs_data_t* settings, const AVCodec* codec, AVCodecContext* c
if (!context->internal) {
if (streamfx::ffmpeg::tools::avoption_exists(context->priv_data, "multipass")) {
// Multi-Pass
if (const char* v = obs_data_get_string(settings, ST_KEY_RATECONTROL_MULTIPASS); (v != nullptr) && (v[0] != '\0')) {
av_opt_set(context->priv_data, "multipass", v, AV_OPT_SEARCH_CHILDREN);
if (const char* v2 = obs_data_get_string(settings, ST_KEY_RATECONTROL_MULTIPASS); (v2 != nullptr) && (v2[0] != '\0')) {
av_opt_set(context->priv_data, "multipass", v2, AV_OPT_SEARCH_CHILDREN);
av_opt_set_int(context->priv_data, "2pass", 0, AV_OPT_SEARCH_CHILDREN);
}
} else {
@ -667,8 +740,41 @@ void nvenc::update(obs_data_t* settings, const AVCodec* codec, AVCodecContext* c
}
}
void nvenc::log_options(obs_data_t*, const AVCodec* codec, AVCodecContext* context)
void nvenc::override_update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
AVCodecContext* context = const_cast<AVCodecContext*>(instance->get_avcodeccontext());
int64_t rclookahead = 0;
int64_t surfaces = 0;
int64_t async_depth = 0;
av_opt_get_int(context, "rc-lookahead", AV_OPT_SEARCH_CHILDREN, &rclookahead);
av_opt_get_int(context, "surfaces", AV_OPT_SEARCH_CHILDREN, &surfaces);
av_opt_get_int(context, "async_depth", AV_OPT_SEARCH_CHILDREN, &async_depth);
// Calculate and set the number of surfaces to allocate (if not user overridden).
if (surfaces == 0) {
surfaces = std::max<int64_t>(4ll, (context->max_b_frames + 1ll) * 4ll);
if (rclookahead > 0) {
surfaces = std::max<int64_t>(1ll, std::max<int64_t>(surfaces, rclookahead + (context->max_b_frames + 5ll)));
} else if (context->max_b_frames > 0) {
surfaces = std::max<int64_t>(4ll, (context->max_b_frames + 1ll) * 4ll);
} else {
surfaces = 4;
}
av_opt_set_int(context, "surfaces", surfaces, AV_OPT_SEARCH_CHILDREN);
}
// Set delay
context->delay = std::min<int>(std::max<int>(static_cast<int>(async_depth), 3), static_cast<int>(surfaces - 1));
}
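For concreteness, a worked example of the surface and delay arithmetic above (values are illustrative, not taken from this diff): with max_b_frames = 2, rc-lookahead = 16, async_depth = 0 and no user override, surfaces starts at max(4, (2 + 1) * 4) = 12, is raised to max(12, 16 + (2 + 5)) = 23 because look-ahead is active, and delay becomes min(max(0, 3), 23 - 1) = 3.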
void nvenc::log(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
auto codec = factory->get_avcodec();
auto context = instance->get_avcodeccontext();
using namespace ::streamfx::ffmpeg;
DLOG_INFO("[%s] NVIDIA NVENC:", codec->name);
@ -729,85 +835,375 @@ void nvenc::log_options(obs_data_t*, const AVCodec* codec, AVCodecContext* conte
tools::print_av_option_bool(context, "constrained-encoding", " Constrained Encoding");
}
void streamfx::encoder::ffmpeg::handler::nvenc::migrate(obs_data_t* settings, uint64_t version, const AVCodec* codec, AVCodecContext* context)
// H264/AVC Handler
//-------------------
nvenc_h264::nvenc_h264() : handler("h264_nvenc"){};
nvenc_h264::~nvenc_h264(){};
bool nvenc_h264::has_keyframes(ffmpeg_factory*)
{
// Only test for A.B.C in A.B.C.D
version = version & STREAMFX_MASK_UPDATE;
return true;
}
#define COPY_UNSET(TYPE, FROM, TO) \
if (obs_data_has_user_value(settings, FROM)) { \
obs_data_set_##TYPE(settings, TO, obs_data_get_##TYPE(settings, FROM)); \
obs_data_unset_user_value(settings, FROM); \
bool nvenc_h264::has_threading(ffmpeg_factory*)
{
return false;
}
bool nvenc_h264::is_hardware(ffmpeg_factory*)
{
return true;
}
bool nvenc_h264::is_reconfigurable(ffmpeg_factory* instance, bool& threads, bool& gpu, bool& keyframes)
{
threads = false;
gpu = false;
keyframes = false;
return true;
}
void nvenc_h264::adjust_info(ffmpeg_factory* factory, std::string& id, std::string& name, std::string& codec)
{
name = "NVIDIA NVENC H.264/AVC (via FFmpeg)";
if (!nvenc::is_available()) // If we don't have NVENC, don't even allow listing it.
factory->get_info()->caps |= OBS_ENCODER_CAP_DEPRECATED | OBS_ENCODER_CAP_INTERNAL;
}
void nvenc_h264::defaults(ffmpeg_factory* factory, obs_data_t* settings)
{
nvenc::defaults(factory, settings);
obs_data_set_default_string(settings, ST_KEY_H264_PROFILE, "");
obs_data_set_default_string(settings, ST_KEY_H264_LEVEL, "auto");
}
void nvenc_h264::properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props)
{
if (!instance) {
this->properties_encoder(factory, instance, props);
} else {
this->properties_runtime(factory, instance, props);
}
}
void nvenc_h264::migrate(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, uint64_t version)
{
nvenc::migrate(factory, instance, settings, version);
if (version < STREAMFX_MAKE_VERSION(0, 11, 1, 0)) {
// Profile
if (auto v = obs_data_get_int(settings, ST_KEY_H264_PROFILE); v != -1) {
if (!obs_data_has_user_value(settings, ST_KEY_H264_PROFILE))
v = 3;
std::map<int64_t, std::string> preset{
{0, "baseline"}, {1, "baseline"}, {2, "main"}, {3, "high"}, {4, "high444p"},
};
if (auto k = preset.find(v); k != preset.end()) {
obs_data_set_string(settings, ST_KEY_H264_PROFILE, k->second.data());
}
}
// Level
obs_data_set_string(settings, ST_KEY_H264_LEVEL, "auto");
}
}
void nvenc_h264::update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
auto codec = factory->get_avcodec();
auto context = instance->get_avcodeccontext();
nvenc::update(factory, instance, settings);
if (!context->internal) {
if (const char* v = obs_data_get_string(settings, ST_KEY_H264_PROFILE); v && (v[0] != '\0')) {
av_opt_set(context->priv_data, "profile", v, AV_OPT_SEARCH_CHILDREN);
}
if (const char* v = obs_data_get_string(settings, ST_KEY_H264_LEVEL); v && (v[0] != '\0')) {
av_opt_set(context->priv_data, "level", v, AV_OPT_SEARCH_CHILDREN);
}
}
}
void nvenc_h264::override_update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
nvenc::override_update(factory, instance, settings);
}
void nvenc_h264::log(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
auto codec = factory->get_avcodec();
auto context = instance->get_avcodeccontext();
nvenc::log(factory, instance, settings);
DLOG_INFO("[%s] H.264/AVC:", codec->name);
::streamfx::ffmpeg::tools::print_av_option_string2(context, context->priv_data, "profile", " Profile", [](int64_t v, std::string_view o) { return std::string(o); });
::streamfx::ffmpeg::tools::print_av_option_string2(context, context->priv_data, "level", " Level", [](int64_t v, std::string_view o) { return std::string(o); });
}
void nvenc_h264::properties_encoder(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props)
{
auto codec = factory->get_avcodec();
AVCodecContext* context = avcodec_alloc_context3(codec);
if (!context->priv_data) {
avcodec_free_context(&context);
return;
}
if (version <= STREAMFX_MAKE_VERSION(0, 8, 0, 0)) {
COPY_UNSET(int, "RateControl.Bitrate.Target", ST_KEY_RATECONTROL_LIMITS_BITRATE_TARGET);
COPY_UNSET(int, "RateControl.Bitrate.Maximum", ST_KEY_RATECONTROL_LIMITS_BITRATE_TARGET);
COPY_UNSET(int, "RateControl.BufferSize", ST_KEY_RATECONTROL_LIMITS_BUFFERSIZE);
COPY_UNSET(int, "RateControl.Quality.Minimum", ST_KEY_RATECONTROL_QP_MINIMUM);
COPY_UNSET(int, "RateControl.Quality.Maximum", ST_KEY_RATECONTROL_QP_MAXIMUM);
COPY_UNSET(double, "RateControl.Quality.Target", ST_KEY_RATECONTROL_LIMITS_QUALITY);
nvenc::properties_before(factory, instance, props, context);
{
obs_properties_t* grp = props;
if (!streamfx::util::are_property_groups_broken()) {
grp = obs_properties_create();
obs_properties_add_group(props, S_CODEC_H264, D_TRANSLATE(S_CODEC_H264), OBS_GROUP_NORMAL, grp);
}
{
auto p = obs_properties_add_list(grp, ST_KEY_H264_PROFILE, D_TRANSLATE(S_CODEC_H264_PROFILE), OBS_COMBO_TYPE_LIST, OBS_COMBO_FORMAT_STRING);
obs_property_list_add_string(p, D_TRANSLATE(S_STATE_DEFAULT), "");
streamfx::ffmpeg::tools::avoption_list_add_entries(context->priv_data, "profile", [&p](const AVOption* opt) {
char buffer[1024];
snprintf(buffer, sizeof(buffer), "%s.%s", S_CODEC_H264_PROFILE, opt->name);
obs_property_list_add_string(p, D_TRANSLATE(buffer), opt->name);
});
}
{
auto p = obs_properties_add_list(grp, ST_KEY_H264_LEVEL, D_TRANSLATE(S_CODEC_H264_LEVEL), OBS_COMBO_TYPE_LIST, OBS_COMBO_FORMAT_STRING);
streamfx::ffmpeg::tools::avoption_list_add_entries(context->priv_data, "level", [&p](const AVOption* opt) {
if (opt->default_val.i64 == 0) {
obs_property_list_add_string(p, D_TRANSLATE(S_STATE_AUTOMATIC), "auto");
} else {
obs_property_list_add_string(p, opt->name, opt->name);
}
});
}
}
if (version < STREAMFX_MAKE_VERSION(0, 11, 0, 0)) {
obs_data_unset_user_value(settings, "Other.AccessUnitDelimiter");
obs_data_unset_user_value(settings, "Other.DecodedPictureBufferSize");
nvenc::properies_after(factory, instance, props, context);
if (context) {
avcodec_free_context(&context);
}
}
void nvenc_h264::properties_runtime(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props)
{
nvenc::properties_runtime(factory, instance, props);
}
static auto inst_h264 = nvenc_h264();
// H265/HEVC Handler
//-------------------
nvenc_hevc::nvenc_hevc() : handler("hevc_nvenc"){};
nvenc_hevc::~nvenc_hevc(){};
bool nvenc_hevc::has_keyframes(ffmpeg_factory*)
{
return true;
}
bool nvenc_hevc::has_threading(ffmpeg_factory* instance)
{
return false;
}
bool nvenc_hevc::is_hardware(ffmpeg_factory* instance)
{
return true;
}
bool nvenc_hevc::is_reconfigurable(ffmpeg_factory* instance, bool& threads, bool& gpu, bool& keyframes)
{
threads = false;
gpu = false;
keyframes = false;
return true;
}
void nvenc_hevc::adjust_info(ffmpeg_factory* factory, std::string& id, std::string& name, std::string& codec)
{
name = "NVIDIA NVENC H.265/HEVC (via FFmpeg)";
if (!nvenc::is_available())
factory->get_info()->caps |= OBS_ENCODER_CAP_DEPRECATED;
}
void nvenc_hevc::defaults(ffmpeg_factory* factory, obs_data_t* settings)
{
nvenc::defaults(factory, settings);
obs_data_set_default_string(settings, ST_KEY_H265_PROFILE, "");
obs_data_set_default_string(settings, ST_KEY_H265_TIER, "");
obs_data_set_default_string(settings, ST_KEY_H265_LEVEL, "auto");
}
void nvenc_hevc::properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props)
{
if (!instance) {
this->properties_encoder(factory, instance, props);
} else {
this->properties_runtime(factory, instance, props);
}
}
void nvenc_hevc::migrate(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, uint64_t version)
{
nvenc::migrate(factory, instance, settings, version);
// Migrate:
// - all versions below 0.12.0.0,
// - all versions above 0.12.0.0 but below 0.12.0.315,
// - and no other versions.
if ((version < STREAMFX_MAKE_VERSION(0, 12, 0, 0))
|| ((version > STREAMFX_MAKE_VERSION(0, 12, 0, 0)) && (version < STREAMFX_MAKE_VERSION(0, 12, 0, 315)))) {
// Accidentally had this stored in the wrong place. Oops.
obs_data_set_string(settings, ST_KEY_H265_LEVEL, obs_data_get_string(settings, ST_KEY_H264_LEVEL));
obs_data_unset_user_value(settings, ST_KEY_H264_LEVEL);
}
if (version < STREAMFX_MAKE_VERSION(0, 11, 1, 0)) {
// Preset
if (auto v = obs_data_get_int(settings, ST_KEY_PRESET); v != -1) {
// Profile
if (auto v = obs_data_get_int(settings, ST_KEY_H265_PROFILE); v != -1) {
if (!obs_data_has_user_value(settings, ST_KEY_H265_PROFILE))
v = 0;
std::map<int64_t, std::string> preset{
{0, "default"}, {1, "slow"}, {2, "medium"}, {3, "fast"}, {4, "hp"}, {5, "hq"}, {6, "bd"}, {7, "ll"}, {8, "llhq"}, {9, "llhp"}, {10, "lossless"}, {11, "losslesshp"},
{0, "main"},
{1, "main10"},
{2, "rext"},
};
if (auto k = preset.find(v); k != preset.end()) {
obs_data_set_string(settings, ST_KEY_PRESET, k->second.data());
obs_data_set_string(settings, ST_KEY_H265_PROFILE, k->second.data());
}
}
// Rate Control Mode
if (auto v = obs_data_get_int(settings, ST_KEY_RATECONTROL_MODE); v != -1) {
if (!obs_data_has_user_value(settings, ST_KEY_RATECONTROL_MODE))
v = 4;
// Tier
if (auto v = obs_data_get_int(settings, ST_KEY_H265_TIER); v != -1) {
if (!obs_data_has_user_value(settings, ST_KEY_H265_TIER))
v = 0;
switch (v) {
case 0: // CQP
obs_data_set_string(settings, ST_KEY_RATECONTROL_MODE, "constqp");
break;
case 2: // VBR_HQ
obs_data_set_int(settings, ST_KEY_RATECONTROL_TWOPASS, 1);
obs_data_set_string(settings, ST_KEY_RATECONTROL_MULTIPASS, "qres");
case 1: // VBR
obs_data_set_string(settings, ST_KEY_RATECONTROL_MODE, "vbr");
break;
case 5: // CBR_LD_HQ
obs_data_set_int(settings, ST_KEY_OTHER_LOWDELAYKEYFRAMESCALE, 1);
case 4: // CBR_HQ
obs_data_set_int(settings, ST_KEY_RATECONTROL_TWOPASS, 1);
obs_data_set_string(settings, ST_KEY_RATECONTROL_MULTIPASS, "qres");
case 3: // CBR
obs_data_set_string(settings, ST_KEY_RATECONTROL_MODE, "cbr");
break;
}
}
// Target Quality
if (auto v = obs_data_get_double(settings, ST_KEY_RATECONTROL_LIMITS_QUALITY); v > 0) {
obs_data_set_double(settings, ST_KEY_RATECONTROL_LIMITS_QUALITY, (v / 100.) * 51.);
}
// B-Frame Reference Modes
if (auto v = obs_data_get_int(settings, ST_KEY_OTHER_BFRAMEREFERENCEMODE); v != -1) {
std::map<int64_t, std::string> preset{
{0, "default"},
{1, "each"},
{2, "middle"},
{0, "main"},
{1, "high"},
};
if (auto k = preset.find(v); k != preset.end()) {
obs_data_set_string(settings, ST_KEY_OTHER_BFRAMEREFERENCEMODE, k->second.data());
obs_data_set_string(settings, ST_KEY_H265_TIER, k->second.data());
}
}
// Level
obs_data_set_string(settings, ST_KEY_H265_LEVEL, "auto");
}
}
void nvenc_hevc::update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
auto codec = factory->get_avcodec();
auto context = instance->get_avcodeccontext();
nvenc::update(factory, instance, settings);
if (!context->internal) {
if (const char* v = obs_data_get_string(settings, ST_KEY_H265_PROFILE); v && (v[0] != '\0')) {
av_opt_set(context->priv_data, "profile", v, AV_OPT_SEARCH_CHILDREN);
}
if (const char* v = obs_data_get_string(settings, ST_KEY_H265_TIER); v && (v[0] != '\0')) {
av_opt_set(context->priv_data, "tier", v, AV_OPT_SEARCH_CHILDREN);
}
if (const char* v = obs_data_get_string(settings, ST_KEY_H265_LEVEL); v && (v[0] != '\0')) {
av_opt_set(context->priv_data, "level", v, AV_OPT_SEARCH_CHILDREN);
}
}
}
void nvenc_hevc::override_update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
nvenc::override_update(factory, instance, settings);
}
void nvenc_hevc::log(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
auto codec = factory->get_avcodec();
auto context = instance->get_avcodeccontext();
nvenc::log(factory, instance, settings);
DLOG_INFO("[%s] H.265/HEVC:", codec->name);
::streamfx::ffmpeg::tools::print_av_option_string2(context, "profile", " Profile", [](int64_t v, std::string_view o) { return std::string(o); });
::streamfx::ffmpeg::tools::print_av_option_string2(context, "level", " Level", [](int64_t v, std::string_view o) { return std::string(o); });
::streamfx::ffmpeg::tools::print_av_option_string2(context, "tier", " Tier", [](int64_t v, std::string_view o) { return std::string(o); });
}
void nvenc_hevc::properties_encoder(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props)
{
auto codec = factory->get_avcodec();
AVCodecContext* context = avcodec_alloc_context3(codec);
if (!context->priv_data) {
avcodec_free_context(&context);
return;
}
nvenc::properties_before(factory, instance, props, context);
{
obs_properties_t* grp = props;
if (!streamfx::util::are_property_groups_broken()) {
grp = obs_properties_create();
obs_properties_add_group(props, S_CODEC_HEVC, D_TRANSLATE(S_CODEC_HEVC), OBS_GROUP_NORMAL, grp);
}
{
auto p = obs_properties_add_list(grp, ST_KEY_H265_PROFILE, D_TRANSLATE(S_CODEC_HEVC_PROFILE), OBS_COMBO_TYPE_LIST, OBS_COMBO_FORMAT_STRING);
obs_property_list_add_int(p, D_TRANSLATE(S_STATE_DEFAULT), -1);
streamfx::ffmpeg::tools::avoption_list_add_entries(context->priv_data, "profile", [&p](const AVOption* opt) {
char buffer[1024];
snprintf(buffer, sizeof(buffer), "%s.%s", S_CODEC_HEVC_PROFILE, opt->name);
obs_property_list_add_string(p, D_TRANSLATE(buffer), opt->name);
});
}
{
auto p = obs_properties_add_list(grp, ST_KEY_H265_TIER, D_TRANSLATE(S_CODEC_HEVC_TIER), OBS_COMBO_TYPE_LIST, OBS_COMBO_FORMAT_STRING);
obs_property_list_add_int(p, D_TRANSLATE(S_STATE_DEFAULT), -1);
streamfx::ffmpeg::tools::avoption_list_add_entries(context->priv_data, "tier", [&p](const AVOption* opt) {
char buffer[1024];
snprintf(buffer, sizeof(buffer), "%s.%s", S_CODEC_HEVC_TIER, opt->name);
obs_property_list_add_string(p, D_TRANSLATE(buffer), opt->name);
});
}
{
auto p = obs_properties_add_list(grp, ST_KEY_H265_LEVEL, D_TRANSLATE(S_CODEC_HEVC_LEVEL), OBS_COMBO_TYPE_LIST, OBS_COMBO_FORMAT_STRING);
streamfx::ffmpeg::tools::avoption_list_add_entries(context->priv_data, "level", [&p](const AVOption* opt) {
if (opt->default_val.i64 == 0) {
obs_property_list_add_string(p, D_TRANSLATE(S_STATE_AUTOMATIC), "auto");
} else {
obs_property_list_add_string(p, opt->name, opt->name);
}
});
}
}
#undef COPY_UNSET
nvenc::properies_after(factory, instance, props, context);
if (context) {
avcodec_free_context(&context);
}
}
void nvenc_hevc::properties_runtime(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props)
{
nvenc::properties_runtime(factory, instance, props);
}
static auto inst_hevc = nvenc_hevc();

View File

@ -0,0 +1,96 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
// Copyright (C) 2020-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
#pragma once
#include "encoders/encoder-ffmpeg.hpp"
#include "encoders/ffmpeg/handler.hpp"
#include "warning-disable.hpp"
#include <cinttypes>
#include <string>
extern "C" {
#include <libavcodec/avcodec.h>
}
#include "warning-enable.hpp"
/* NVENC has multiple compression modes:
- CBR: Constant Bitrate (rc=cbr)
- VBR: Variable Bitrate (rc=vbr)
- CQP: Constant QP (rc=cqp)
- CQ: Constant Quality (rc=vbr b=0 maxrate=0 qmin=0 qmax=51 cq=qp); this is essentially CRF as in x264.
*/
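// Editor's note: a minimal sketch of how the "CQ" combination described above could be
// applied through FFmpeg's AVOption API. `ctx` is a hypothetical, already-allocated
// AVCodecContext for an NVENC encoder and `quality` is an arbitrary target value; "rc"
// and "cq" are the option names exposed by FFmpeg's NVENC wrappers. This is an
// illustration only, not part of the plugin.
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>
}
static void configure_cq_sketch(AVCodecContext* ctx, int quality)
{
	av_opt_set(ctx->priv_data, "rc", "vbr", AV_OPT_SEARCH_CHILDREN); // rc=vbr
	ctx->bit_rate    = 0;                                            // b=0
	ctx->rc_max_rate = 0;                                            // maxrate=0
	ctx->qmin        = 0;                                            // qmin=0
	ctx->qmax        = 51;                                           // qmax=51
	av_opt_set_int(ctx->priv_data, "cq", quality, AV_OPT_SEARCH_CHILDREN); // cq=qp
}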
namespace streamfx::encoder::ffmpeg {
namespace nvenc {
bool is_available();
void defaults(ffmpeg_factory* factory, obs_data_t* settings);
void properties_before(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props, AVCodecContext* context);
void properies_after(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props, AVCodecContext* context);
void properties_runtime(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props);
void migrate(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, uint64_t version);
void update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings);
void override_update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings);
void log(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings);
} // namespace nvenc
class nvenc_h264 : public handler {
public:
nvenc_h264();
virtual ~nvenc_h264();
bool has_keyframes(ffmpeg_factory* factory) override;
bool has_threading(ffmpeg_factory* factory) override;
bool is_hardware(ffmpeg_factory* factory) override;
bool is_reconfigurable(ffmpeg_factory* factory, bool& threads, bool& gpu, bool& keyframes) override;
void adjust_info(ffmpeg_factory* factory, std::string& id, std::string& name, std::string& codec) override;
std::string help(ffmpeg_factory* factory) override
{
return "https://github.com/Xaymar/obs-StreamFX/wiki/Encoder-FFmpeg-NVENC";
};
void defaults(ffmpeg_factory* factory, obs_data_t* settings) override;
void properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props) override;
void migrate(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, uint64_t version) override;
void update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings) override;
void override_update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings) override;
void log(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings) override;
private:
void properties_encoder(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props);
void properties_runtime(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props);
};
class nvenc_hevc : public handler {
public:
nvenc_hevc();
virtual ~nvenc_hevc();
bool has_keyframes(ffmpeg_factory* factory) override;
bool has_threading(ffmpeg_factory* factory) override;
bool is_hardware(ffmpeg_factory* factory) override;
bool is_reconfigurable(ffmpeg_factory* factory, bool& threads, bool& gpu, bool& keyframes) override;
void adjust_info(ffmpeg_factory* factory, std::string& id, std::string& name, std::string& codec) override;
std::string help(ffmpeg_factory* factory) override
{
return "https://github.com/Xaymar/obs-StreamFX/wiki/Encoder-FFmpeg-NVENC";
};
void defaults(ffmpeg_factory* factory, obs_data_t* settings) override;
void properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props) override;
void migrate(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, uint64_t version) override;
void update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings) override;
void override_update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings) override;
void log(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings) override;
private:
void properties_encoder(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props);
void properties_runtime(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props);
};
} // namespace streamfx::encoder::ffmpeg
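// Editor's note: a hedged sketch of how calling code might query a handler through the
// interface declared above. Only the member signatures are taken from this header; the
// free function and its parameters are hypothetical and exist purely to illustrate the
// out-parameter convention of is_reconfigurable().
#include "encoders/encoder-ffmpeg.hpp"
#include "encoders/ffmpeg/handler.hpp"
void example_query_sketch(streamfx::encoder::ffmpeg::handler& h, streamfx::encoder::ffmpeg::ffmpeg_factory* factory)
{
	bool threads = false, gpu = false, keyframes = false;
	if (h.is_reconfigurable(factory, threads, gpu, keyframes)) {
		// Each flag reports whether that aspect may change while the encoder is active.
	}
	if (h.is_hardware(factory) && h.has_keyframes(factory)) {
		// e.g. both NVENC handlers above report hardware encoding with keyframe support.
	}
}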

View File

@ -2,7 +2,7 @@
// Copyright (C) 2020-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
#include "prores_aw_handler.hpp"
#include "prores_aw.hpp"
#include "common.hpp"
#include "../codecs/prores.hpp"
#include "ffmpeg/tools.hpp"
@ -12,34 +12,9 @@
#include <array>
#include "warning-enable.hpp"
using namespace streamfx::encoder::ffmpeg::handler;
using namespace streamfx::encoder::ffmpeg;
using namespace streamfx::encoder::codec::prores;
void prores_aw_handler::override_colorformat(AVPixelFormat& target_format, obs_data_t* settings, const AVCodec* codec, AVCodecContext*)
{
static const std::array<std::pair<profile, AVPixelFormat>, static_cast<size_t>(profile::_COUNT)> profile_to_format_map{
std::pair{profile::APCO, AV_PIX_FMT_YUV422P10}, std::pair{profile::APCS, AV_PIX_FMT_YUV422P10}, std::pair{profile::APCN, AV_PIX_FMT_YUV422P10}, std::pair{profile::APCH, AV_PIX_FMT_YUV422P10}, std::pair{profile::AP4H, AV_PIX_FMT_YUV444P10}, std::pair{profile::AP4X, AV_PIX_FMT_YUV444P10},
};
const int64_t profile_id = obs_data_get_int(settings, S_CODEC_PRORES_PROFILE);
for (auto kv : profile_to_format_map) {
if (kv.first == static_cast<profile>(profile_id)) {
target_format = kv.second;
break;
}
}
}
void prores_aw_handler::get_defaults(obs_data_t* settings, const AVCodec*, AVCodecContext*, bool)
{
obs_data_set_default_int(settings, S_CODEC_PRORES_PROFILE, 0);
}
bool prores_aw_handler::has_pixel_format_support(ffmpeg_factory* instance)
{
return false;
}
inline const char* profile_to_name(const AVProfile* ptr)
{
switch (static_cast<profile>(ptr->profile)) {
@ -60,11 +35,25 @@ inline const char* profile_to_name(const AVProfile* ptr)
}
}
void prores_aw_handler::get_properties(obs_properties_t* props, const AVCodec* codec, AVCodecContext* context, bool)
prores_aw::prores_aw() : handler("prores_aw") {}
prores_aw::~prores_aw() {}
bool prores_aw::has_keyframes(ffmpeg_factory* instance)
{
if (!context) {
return false;
}
void prores_aw::defaults(ffmpeg_factory* factory, obs_data_t* settings)
{
obs_data_set_default_int(settings, S_CODEC_PRORES_PROFILE, 0);
}
void prores_aw::properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props)
{
if (!instance) {
auto p = obs_properties_add_list(props, S_CODEC_PRORES_PROFILE, D_TRANSLATE(S_CODEC_PRORES_PROFILE), OBS_COMBO_TYPE_LIST, OBS_COMBO_FORMAT_INT);
for (auto ptr = codec->profiles; ptr->profile != FF_PROFILE_UNKNOWN; ptr++) {
for (auto ptr = factory->get_avcodec()->profiles; ptr->profile != FF_PROFILE_UNKNOWN; ptr++) {
obs_property_list_add_int(p, profile_to_name(ptr), static_cast<int64_t>(ptr->profile));
}
} else {
@ -72,17 +61,19 @@ void prores_aw_handler::get_properties(obs_properties_t* props, const AVCodec* c
}
}
void prores_aw_handler::update(obs_data_t* settings, const AVCodec*, AVCodecContext* context)
void prores_aw::update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
context->profile = static_cast<int>(obs_data_get_int(settings, S_CODEC_PRORES_PROFILE));
if (instance) {
instance->get_avcodeccontext()->profile = static_cast<int>(obs_data_get_int(settings, S_CODEC_PRORES_PROFILE));
}
}
void prores_aw_handler::log_options(obs_data_t* settings, const AVCodec* codec, AVCodecContext* context)
void prores_aw::log(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings)
{
DLOG_INFO("[%s] Apple ProRes:", codec->name);
::streamfx::ffmpeg::tools::print_av_option_string(context, "profile", " Profile", [&codec](int64_t v) {
DLOG_INFO("[%s] Apple ProRes:", factory->get_avcodec()->name);
::streamfx::ffmpeg::tools::print_av_option_string(instance->get_avcodeccontext(), "profile", " Profile", [&factory](int64_t v) {
int val = static_cast<int>(v);
for (auto ptr = codec->profiles; (ptr->profile != FF_PROFILE_UNKNOWN) && (ptr != nullptr); ptr++) {
for (auto ptr = factory->get_avcodec()->profiles; (ptr->profile != FF_PROFILE_UNKNOWN) && (ptr != nullptr); ptr++) {
if (ptr->profile == val) {
return std::string(profile_to_name(ptr));
}
@ -91,7 +82,19 @@ void prores_aw_handler::log_options(obs_data_t* settings, const AVCodec* codec,
});
}
bool prores_aw_handler::has_keyframe_support(ffmpeg_factory* instance)
void prores_aw::override_colorformat(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, AVPixelFormat& target_format)
{
return false;
static const std::array<std::pair<profile, AVPixelFormat>, static_cast<size_t>(profile::_COUNT)> profile_to_format_map{
std::pair{profile::APCO, AV_PIX_FMT_YUV422P10}, std::pair{profile::APCS, AV_PIX_FMT_YUV422P10}, std::pair{profile::APCN, AV_PIX_FMT_YUV422P10}, std::pair{profile::APCH, AV_PIX_FMT_YUV422P10}, std::pair{profile::AP4H, AV_PIX_FMT_YUV444P10}, std::pair{profile::AP4X, AV_PIX_FMT_YUV444P10},
};
const int64_t profile_id = obs_data_get_int(settings, S_CODEC_PRORES_PROFILE);
for (auto kv : profile_to_format_map) {
if (kv.first == static_cast<profile>(profile_id)) {
target_format = kv.second;
break;
}
}
}
static auto inst = prores_aw();
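// Editor's note: a hedged, standalone sketch (not from the StreamFX sources) showing how
// the pixel format chosen by override_colorformat() above can be checked against what the
// "prores_aw" encoder actually advertises. avcodec_find_encoder_by_name() and the
// AV_PIX_FMT_NONE-terminated AVCodec::pix_fmts list are standard FFmpeg API.
extern "C" {
#include <libavcodec/avcodec.h>
}
#include <cstdio>
int main()
{
	const AVCodec* codec = avcodec_find_encoder_by_name("prores_aw");
	if (!codec || !codec->pix_fmts)
		return 1;
	const AVPixelFormat target = AV_PIX_FMT_YUV444P10; // what profile::AP4H maps to above
	bool supported = false;
	for (const AVPixelFormat* fmt = codec->pix_fmts; *fmt != AV_PIX_FMT_NONE; ++fmt)
		supported = supported || (*fmt == target);
	std::printf("prores_aw supports 10-bit 4:4:4: %s\n", supported ? "yes" : "no");
	return 0;
}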

View File

@ -0,0 +1,34 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2020-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
#pragma once
#include "encoders/encoder-ffmpeg.hpp"
#include "encoders/ffmpeg/handler.hpp"
#include "warning-disable.hpp"
extern "C" {
#include <libavcodec/avcodec.h>
}
#include "warning-enable.hpp"
namespace streamfx::encoder::ffmpeg {
class prores_aw : public handler {
public:
prores_aw();
virtual ~prores_aw();
virtual bool has_keyframes(ffmpeg_factory* factory);
virtual std::string help(ffmpeg_factory* factory) {
return "https://github.com/Xaymar/obs-StreamFX/wiki/Encoder-FFmpeg-Apple-ProRes";
}
virtual void defaults(ffmpeg_factory* factory, obs_data_t* settings);
virtual void properties(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_properties_t* props);
virtual void update(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings);
virtual void log(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings);
virtual void override_colorformat(ffmpeg_factory* factory, ffmpeg_instance* instance, obs_data_t* settings, AVPixelFormat& target_format);
};
} // namespace streamfx::encoder::ffmpeg

View File

@ -0,0 +1,9 @@
# AUTOGENERATED COPYRIGHT HEADER START
# Copyright (C) 2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
# AUTOGENERATED COPYRIGHT HEADER END
cmake_minimum_required(VERSION 3.26)
project("Mirror")
list(APPEND CMAKE_MESSAGE_INDENT "[${PROJECT_NAME}] ")
streamfx_add_component("Mirror")

View File

@ -1,5 +1,5 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2019-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// Copyright (C) 2017-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// Copyright (C) 2022 lainon <GermanAizek@yandex.ru>
// AUTOGENERATED COPYRIGHT HEADER END
@ -140,7 +140,7 @@ void mirror_instance::save(obs_data_t* data)
}
}
void mirror_instance::video_tick(float_t time) {}
void mirror_instance::video_tick(float time) {}
void mirror_instance::video_render(gs_effect_t* effect)
{
@ -316,11 +316,9 @@ obs_properties_t* mirror_factory::get_properties2(mirror_instance* data)
obs_properties_t* pr = obs_properties_create();
obs_property_t* p = nullptr;
#ifdef ENABLE_FRONTEND
{
obs_properties_add_button2(pr, S_MANUAL_OPEN, D_TRANSLATE(S_MANUAL_OPEN), streamfx::source::mirror::mirror_factory::on_manual_open, nullptr);
}
#endif
{
p = obs_properties_add_list(pr, ST_KEY_SOURCE, D_TRANSLATE(ST_I18N_SOURCE), OBS_COMBO_TYPE_LIST, OBS_COMBO_FORMAT_STRING);
@ -365,7 +363,6 @@ obs_properties_t* mirror_factory::get_properties2(mirror_instance* data)
return pr;
}
#ifdef ENABLE_FRONTEND
bool mirror_factory::on_manual_open(obs_properties_t* props, obs_property_t* property, void* data)
{
try {
@ -379,7 +376,6 @@ bool mirror_factory::on_manual_open(obs_properties_t* props, obs_property_t* pro
return false;
}
}
#endif
std::shared_ptr<mirror_factory> mirror_factory::instance()
{

View File

@ -1,5 +1,5 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2019-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// Copyright (C) 2017-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
#pragma once
@ -55,7 +55,7 @@ namespace streamfx::source::mirror {
virtual void update(obs_data_t*) override;
virtual void save(obs_data_t*) override;
virtual void video_tick(float_t) override;
virtual void video_tick(float) override;
virtual void video_render(gs_effect_t*) override;
virtual void enum_active_sources(obs_source_enum_proc_t, void*) override;
@ -81,9 +81,7 @@ namespace streamfx::source::mirror {
virtual obs_properties_t* get_properties2(source::mirror::mirror_instance* data) override;
#ifdef ENABLE_FRONTEND
static bool on_manual_open(obs_properties_t* props, obs_property_t* property, void* data);
#endif
public: // Singleton
static void initialize();

View File

@ -0,0 +1,48 @@
# AUTOGENERATED COPYRIGHT HEADER START
# Copyright (C) 2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
# AUTOGENERATED COPYRIGHT HEADER END
cmake_minimum_required(VERSION 3.26)
project("NVIDIA")
list(APPEND CMAKE_MESSAGE_INDENT "[${PROJECT_NAME}] ")
#- NVIDIA Audio Effects SDK
if(NOT TARGET NVIDIA::AFX)
add_library(NVIDIA::AFX IMPORTED INTERFACE)
target_include_directories(NVIDIA::AFX
INTERFACE
"${StreamFX_SOURCE_DIR}/third-party/nvidia-maxine-afx-sdk/nvafx/include/"
)
endif()
#- NVIDIA Augmented Reality SDK
if(NOT TARGET NVIDIA::AR)
add_library(NVIDIA::AR IMPORTED INTERFACE)
target_include_directories(NVIDIA::AR
INTERFACE
"${StreamFX_SOURCE_DIR}/third-party/nvidia-maxine-ar-sdk/nvar/include/"
"${StreamFX_SOURCE_DIR}/third-party/nvidia-maxine-ar-sdk/nvar/src/"
)
endif()
#- NVIDIA Video Effects SDK
if(NOT TARGET NVIDIA::VFX)
add_library(NVIDIA::VFX IMPORTED INTERFACE)
target_include_directories(NVIDIA::VFX
INTERFACE
"${StreamFX_SOURCE_DIR}/third-party/nvidia-maxine-vfx-sdk/nvvfx/include/"
"${StreamFX_SOURCE_DIR}/third-party/nvidia-maxine-vfx-sdk/nvvfx/src/"
)
endif()
streamfx_add_component("NVIDIA")
target_link_libraries(${COMPONENT_TARGET}
PRIVATE
NVIDIA::AFX
NVIDIA::AR
NVIDIA::VFX
)
if(NOT D_PLATFORM_WINDOWS)
streamfx_disable_component("NVIDIA" REASON "NVIDIA integration is (currently) only available for Windows under Direct3D11.")
endif()

View File

@ -32,58 +32,58 @@ namespace streamfx::nvidia::ar {
}
public /* Int32 */:
inline cv::result set(parameter_t param, uint32_t const value)
inline cv::result set_uint32(parameter_t param, uint32_t const value)
{
return _nvar->NvAR_SetU32(_fx.get(), param, value);
}
inline cv::result get(parameter_t param, uint32_t* value)
inline cv::result get_uint32(parameter_t param, uint32_t* value)
{
return _nvar->NvAR_GetU32(_fx.get(), param, value);
}
inline cv::result set(parameter_t param, int32_t const value)
inline cv::result set_int32(parameter_t param, int32_t const value)
{
return _nvar->NvAR_SetS32(_fx.get(), param, value);
}
inline cv::result get(parameter_t param, int32_t* value)
inline cv::result get_int32(parameter_t param, int32_t* value)
{
return _nvar->NvAR_GetS32(_fx.get(), param, value);
}
public /* Int64 */:
inline cv::result set(parameter_t param, uint64_t const value)
inline cv::result set_uint64(parameter_t param, uint64_t const value)
{
return _nvar->NvAR_SetU64(_fx.get(), param, value);
}
inline cv::result get(parameter_t param, uint64_t* value)
inline cv::result get_uint64(parameter_t param, uint64_t* value)
{
return _nvar->NvAR_GetU64(_fx.get(), param, value);
}
public /* Float32 */:
inline cv::result set(parameter_t param, float const value)
inline cv::result set_float32(parameter_t param, float const value)
{
return _nvar->NvAR_SetF32(_fx.get(), param, value);
}
inline cv::result get(parameter_t param, float* value)
inline cv::result get_float32(parameter_t param, float* value)
{
return _nvar->NvAR_GetF32(_fx.get(), param, value);
}
inline cv::result set(parameter_t param, float* const value, int32_t size)
inline cv::result set_float32array(parameter_t param, float* const value, int32_t size)
{
return _nvar->NvAR_SetF32Array(_fx.get(), param, value, static_cast<int32_t>(size));
}
inline cv::result get(parameter_t param, const float* value, int32_t size)
inline cv::result get_float32array(parameter_t param, const float* value, int32_t size)
{
return _nvar->NvAR_GetF32Array(_fx.get(), param, &value, &size);
}
inline cv::result set(parameter_t param, std::vector<float> const& value)
inline cv::result set_float32array(parameter_t param, std::vector<float> const& value)
{
return _nvar->NvAR_SetF32Array(_fx.get(), param, value.data(), static_cast<int32_t>(value.size()));
}
inline cv::result get(parameter_t param, std::vector<float>& value)
inline cv::result get_float32array(parameter_t param, std::vector<float>& value)
{
const float* data;
int32_t size;
@ -98,71 +98,71 @@ namespace streamfx::nvidia::ar {
}
public /* Float64 */:
inline cv::result set(parameter_t param, double const value)
inline cv::result set_float64(parameter_t param, double const value)
{
return _nvar->NvAR_SetF64(_fx.get(), param, value);
}
inline cv::result get(parameter_t param, double* value)
inline cv::result get_float64(parameter_t param, double* value)
{
return _nvar->NvAR_GetF64(_fx.get(), param, value);
}
public /* String */:
inline cv::result set(parameter_t param, const char* const value)
inline cv::result set_string(parameter_t param, const char* const value)
{
return _nvar->NvAR_SetString(_fx.get(), param, value);
};
inline cv::result get(parameter_t param, const char*& value)
inline cv::result get_string(parameter_t param, const char*& value)
{
return _nvar->NvAR_GetString(_fx.get(), param, &value);
};
inline cv::result set(parameter_t param, std::string_view const value)
inline cv::result set_string(parameter_t param, std::string_view const value)
{
return _nvar->NvAR_SetString(_fx.get(), param, value.data());
};
cv::result get(parameter_t param, std::string_view& value);
inline cv::result set(parameter_t param, std::string const& value)
inline cv::result set_string(parameter_t param, std::string const& value)
{
return _nvar->NvAR_SetString(_fx.get(), param, value.c_str());
};
cv::result get(parameter_t param, std::string& value);
public /* CUDA Stream */:
inline cv::result set(parameter_t param, cuda::stream_t const value)
inline cv::result set_cuda_stream(parameter_t param, cuda::stream_t const value)
{
return _nvar->NvAR_SetCudaStream(_fx.get(), param, value);
};
inline cv::result get(parameter_t param, cuda::stream_t& value)
inline cv::result get_cuda_stream(parameter_t param, cuda::stream_t& value)
{
return _nvar->NvAR_GetCudaStream(_fx.get(), param, &value);
};
inline cv::result set(parameter_t param, std::shared_ptr<::streamfx::nvidia::cuda::stream> const value)
inline cv::result set_cuda_stream(parameter_t param, std::shared_ptr<::streamfx::nvidia::cuda::stream> const value)
{
return _nvar->NvAR_SetCudaStream(_fx.get(), param, value->get());
}
//inline cv::result get(parameter_t param, std::shared_ptr<::streamfx::nvidia::cuda::stream> value);
public /* CV Image */:
inline cv::result set(parameter_t param, cv::image_t& value)
inline cv::result set_image(parameter_t param, cv::image_t& value)
{
return _nvar->NvAR_SetObject(_fx.get(), param, &value, sizeof(cv::image_t));
};
inline cv::result get(parameter_t param, cv::image_t*& value)
inline cv::result get_image(parameter_t param, cv::image_t*& value)
{
return _nvar->NvAR_GetObject(_fx.get(), param, reinterpret_cast<object_t*>(&value), sizeof(cv::image_t));
};
inline cv::result set(parameter_t param, std::shared_ptr<cv::image> const value)
inline cv::result set_image(parameter_t param, std::shared_ptr<cv::image> const value)
{
return _nvar->NvAR_SetObject(_fx.get(), param, value->get_image(), sizeof(cv::image_t));
};
//inline cv::result get(parameter_t param, std::shared_ptr<cv::image>& value);
public /* CV Texture */:
inline cv::result set(parameter_t param, std::shared_ptr<cv::texture> const value)
inline cv::result set_image(parameter_t param, std::shared_ptr<cv::texture> const value)
{
return _nvar->NvAR_SetObject(_fx.get(), param, value->get_image(), sizeof(cv::image_t));
};
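// Editor's note: a hedged illustration of the explicit accessor style used above,
// assuming an already-constructed wrapper object `fx` of the class this hunk belongs to;
// the parameter keys are placeholders standing in for the constants defined by the
// NVIDIA AR SDK headers. With the previous overloaded set()/get(), the value type
// silently selected the overload; spelling the width out makes the intent explicit:
//
//	fx.set_uint32(kTemporalKey, 1u);               // previously: fx.set(kTemporalKey, 1u)
//	fx.set_cuda_stream(kStreamKey, cuda_stream);   // std::shared_ptr<cuda::stream> overload
//	uint32_t count = 0;
//	fx.get_uint32(kCountKey, &count);              // previously: fx.get(kCountKey, &count)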

View File

@ -1,5 +1,5 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2020-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// Copyright (C) 2017-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// Copyright (C) 2022 lainon <GermanAizek@yandex.ru>
// AUTOGENERATED COPYRIGHT HEADER END

View File

@ -1,5 +1,5 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2020-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// Copyright (C) 2017-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
#pragma once

View File

@ -1,5 +1,5 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2020-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// Copyright (C) 2017-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
#pragma once

View File

@ -1,5 +1,5 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2021-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// Copyright (C) 2017-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
#pragma once

View File

@ -1,5 +1,5 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2020-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// Copyright (C) 2017-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
#pragma once

View File

@ -1,5 +1,5 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2021-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// Copyright (C) 2020-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
#pragma once

View File

@ -1,5 +1,5 @@
// AUTOGENERATED COPYRIGHT HEADER START
// Copyright (C) 2021-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// Copyright (C) 2020-2023 Michael Fabian 'Xaymar' Dirks <info@xaymar.com>
// AUTOGENERATED COPYRIGHT HEADER END
#pragma once

View File

@ -39,137 +39,137 @@ namespace streamfx::nvidia::vfx {
}
public /* Int32 */:
inline cv::result set(parameter_t param, uint32_t const value)
inline cv::result set_uint32(parameter_t param, uint32_t const value)
{
return _nvvfx->NvVFX_SetU32(_fx.get(), param, value);
};
inline cv::result get(parameter_t param, uint32_t& value)
}
inline cv::result get_uint32(parameter_t param, uint32_t& value)
{
return _nvvfx->NvVFX_GetU32(_fx.get(), param, &value);
};
}
inline cv::result set(parameter_t param, int32_t const value)
inline cv::result set_int32(parameter_t param, int32_t const value)
{
return _nvvfx->NvVFX_SetS32(_fx.get(), param, value);
};
inline cv::result get(parameter_t param, int32_t& value)
}
inline cv::result get_int32(parameter_t param, int32_t& value)
{
return _nvvfx->NvVFX_GetS32(_fx.get(), param, &value);
};
}
public /* Int64 */:
inline cv::result set(parameter_t param, uint64_t const value)
inline cv::result set_uint64(parameter_t param, uint64_t const value)
{
return _nvvfx->NvVFX_SetU64(_fx.get(), param, value);
};
inline cv::result get(parameter_t param, uint64_t& value)
}
inline cv::result get_uint64(parameter_t param, uint64_t& value)
{
return _nvvfx->NvVFX_GetU64(_fx.get(), param, &value);
};
}
public /* Float32 */:
inline cv::result set(parameter_t param, float const value)
inline cv::result set_float32(parameter_t param, float const value)
{
return _nvvfx->NvVFX_SetF32(_fx.get(), param, value);
};
inline cv::result get(parameter_t param, float& value)
}
inline cv::result get_float32(parameter_t param, float& value)
{
return _nvvfx->NvVFX_GetF32(_fx.get(), param, &value);
};
}
public /* Float64 */:
inline cv::result set(parameter_t param, double const value)
inline cv::result set_float64(parameter_t param, double const value)
{
return _nvvfx->NvVFX_SetF64(_fx.get(), param, value);
};
inline cv::result get(parameter_t param, double& value)
}
inline cv::result get_float64(parameter_t param, double& value)
{
return _nvvfx->NvVFX_GetF64(_fx.get(), param, &value);
};
}
public /* String */:
inline cv::result set(parameter_t param, const char* const value)
inline cv::result set_string(parameter_t param, const char* const value)
{
return _nvvfx->NvVFX_SetString(_fx.get(), param, value);
};
inline cv::result get(parameter_t param, const char*& value)
}
inline cv::result get_string(parameter_t param, const char*& value)
{
return _nvvfx->NvVFX_GetString(_fx.get(), param, &value);
};
}
inline cv::result set(parameter_t param, std::string_view const& value)
inline cv::result set_string(parameter_t param, std::string_view const& value)
{
return _nvvfx->NvVFX_SetString(_fx.get(), param, value.data());
};
cv::result get(parameter_t param, std::string_view& value);
}
cv::result get_string(parameter_t param, std::string_view& value);
inline cv::result set(parameter_t param, std::string const& value)
inline cv::result set_string(parameter_t param, std::string const& value)
{
return _nvvfx->NvVFX_SetString(_fx.get(), param, value.c_str());
};
cv::result get(parameter_t param, std::string& value);
}
cv::result get_string(parameter_t param, std::string& value);
public /* CUDA Stream */:
inline cv::result set(parameter_t param, cuda::stream_t const& value)
inline cv::result set_cuda_stream(parameter_t param, cuda::stream_t const& value)
{
return _nvvfx->NvVFX_SetCudaStream(_fx.get(), param, value);
};
inline cv::result get(parameter_t param, cuda::stream_t& value)
}
inline cv::result get_cuda_stream(parameter_t param, cuda::stream_t& value)
{
return _nvvfx->NvVFX_GetCudaStream(_fx.get(), param, &value);
};
}
inline cv::result set(parameter_t param, std::shared_ptr<cuda::stream> const& value)
inline cv::result set_cuda_stream(parameter_t param, std::shared_ptr<cuda::stream> const& value)
{
return _nvvfx->NvVFX_SetCudaStream(_fx.get(), param, value->get());
};
}
//cv::result get_stream(parameter_t param, std::shared_ptr<cuda::stream>& value);
public /* CV Image */:
inline cv::result set(parameter_t param, cv::image_t* value)
inline cv::result set_image(parameter_t param, cv::image_t* value)
{
return _nvvfx->NvVFX_SetImage(_fx.get(), param, value);
};
inline cv::result get(parameter_t param, cv::image_t* value)
}
inline cv::result get_image(parameter_t param, cv::image_t* value)
{
return _nvvfx->NvVFX_GetImage(_fx.get(), param, value);
};
}
inline cv::result set(parameter_t param, std::shared_ptr<cv::image> const& value)
inline cv::result set_image(parameter_t param, std::shared_ptr<cv::image> const& value)
{
return _nvvfx->NvVFX_SetImage(_fx.get(), param, value->get_image());
};
inline cv::result get(parameter_t param, std::shared_ptr<cv::image>& value)
}
inline cv::result get_image(parameter_t param, std::shared_ptr<cv::image>& value)
{
return _nvvfx->NvVFX_GetImage(_fx.get(), param, value->get_image());
};
}
public /* CV Texture */:
inline cv::result set(parameter_t param, std::shared_ptr<cv::texture> const& value)
inline cv::result set_image(parameter_t param, std::shared_ptr<cv::texture> const& value)
{
return _nvvfx->NvVFX_SetImage(_fx.get(), param, value->get_image());
};
}
//cv::result get(parameter_t param, std::shared_ptr<cv::texture>& value);
public /* Objects */:
inline cv::result set_object(parameter_t param, void* const value)
{
return _nvvfx->NvVFX_SetObject(_fx.get(), param, value);
};
}
inline cv::result get_object(parameter_t param, void*& value)
{
return _nvvfx->NvVFX_GetObject(_fx.get(), param, &value);
};
}
public /* Control */:
inline cv::result load()
{
return _nvvfx->NvVFX_Load(_fx.get());
};
}
inline cv::result run(bool async = false)
{
return _nvvfx->NvVFX_Run(_fx.get(), async ? 1 : 0);
};
}
};
} // namespace streamfx::nvidia::vfx
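// Editor's note: a hedged sketch of the typical call sequence with the explicit accessors
// and control methods above, assuming an effect wrapper `fx` of the class this hunk
// belongs to. The parameter keys are placeholders for the NVVFX_* constants from the SDK
// headers; the point being illustrated is the ordering: configure, then load(), then run().
//
//	fx.set_string(kModelDirKey, model_path);
//	fx.set_cuda_stream(kStreamKey, cuda_stream);
//	fx.set_image(kSourceKey, source_image);            // std::shared_ptr<cv::image>
//	fx.set_image(kDestinationKey, destination_image);
//	auto result = fx.load();                           // compile/allocate once configured
//	// ...check `result`, then:
//	fx.run();                                          // synchronous
//	fx.run(true);                                      // or queued asynchronously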

Some files were not shown because too many files have changed in this diff.