Compare commits

...

21 Commits

Author SHA1 Message Date
b5d0d16d31 enable toxav in cd 2024-10-01 21:27:41 +02:00
0039340fd5 further small reframer fixes, workaround distortion bug by wrapping sdl input with reframer (magic fix, someone pls tell my why) 2024-10-01 18:30:20 +02:00
45e6fe0033 toxav param to flake 2024-10-01 12:42:49 +02:00
84c48d7f5a add simple reframer tests (no errors found) 2024-10-01 12:05:08 +02:00
acbc1552eb refactor pop reframer 2024-10-01 11:39:26 +02:00
9501292fc9 accept call 2024-10-01 11:13:27 +02:00
a1d3e0a480 improve src filling with lookup table 2024-09-30 12:37:40 +02:00
0886e9c8ef fix toxav interval (sad) 2024-09-30 00:10:04 +02:00
064106c6b2 add audio incoming source 2024-09-30 00:10:04 +02:00
06c7c1fa37 add broken reframer and voip changes 2024-09-30 00:10:04 +02:00
472615a31f wip toxav voip model (only asink and outgoing call and missing reframer) 2024-09-30 00:10:04 +02:00
0acabf70b7 voip model draft 1 2024-09-30 00:10:03 +02:00
d8a58ee286 amends to prev commit (oops) 2024-09-30 00:08:56 +02:00
28be54ac97 fix audo connection for sinks and add a try catch block for the file sorting 2024-09-29 18:24:34 +02:00
ce6febdc29 dvt default output 2024-09-29 13:09:04 +02:00
3d8deb310e implement stream default src/sink 2024-09-28 19:16:57 +02:00
248b00dafb add video frame type and debug viewer and debug test source
the test source thread will always exist for now
the debug view will open a window for each connection
2024-09-28 11:56:47 +02:00
59cdb2638f improve toxav module 2024-09-27 22:37:06 +02:00
61b9044f94 add sdl audio input/output devices and add by default (if audio works) 2024-09-27 17:38:14 +02:00
d89ab0bf42 add stream manager ui 2024-09-27 16:05:16 +02:00
b899b8131e start porting frame streams 2024-09-27 13:26:18 +02:00
29 changed files with 2982 additions and 82 deletions


@ -22,10 +22,10 @@ jobs:
          submodules: recursive
      - name: Install Dependencies
-       run: sudo apt update && sudo apt -y install libsodium-dev cmake
+       run: sudo apt update && sudo apt -y install libsodium-dev cmake libvpx-dev libopus-dev
      - name: Configure CMake
-       run: cmake -B ${{github.workspace}}/build -DCMAKE_BUILD_TYPE=${{env.BUILD_TYPE}}
+       run: cmake -B ${{github.workspace}}/build -DCMAKE_BUILD_TYPE=${{env.BUILD_TYPE}} -DTOMATO_TOX_AV=ON
      - name: Build
        run: cmake --build ${{github.workspace}}/build --config ${{env.BUILD_TYPE}} -j 4 -t tomato
@ -101,7 +101,7 @@ jobs:
      - name: Configure CMake
        env:
          ANDROID_NDK_HOME: ${{steps.setup_ndk.outputs.ndk-path}}
-       run: cmake -B ${{github.workspace}}/build -DCMAKE_BUILD_TYPE=${{env.BUILD_TYPE}} -DCMAKE_TOOLCHAIN_FILE=/usr/local/share/vcpkg/scripts/buildsystems/vcpkg.cmake -DVCPKG_TARGET_TRIPLET=${{matrix.platform.vcpkg_toolkit}} -DANDROID=1 -DANDROID_PLATFORM=23 -DANDROID_ABI=${{matrix.platform.ndk_abi}} -DVCPKG_CHAINLOAD_TOOLCHAIN_FILE=${{steps.setup_ndk.outputs.ndk-path}}/build/cmake/android.toolchain.cmake -DSDLIMAGE_JPG_SHARED=OFF -DSDLIMAGE_PNG_SHARED=OFF -DTOMATO_MAIN_SO=ON
+       run: cmake -B ${{github.workspace}}/build -DCMAKE_BUILD_TYPE=${{env.BUILD_TYPE}} -DCMAKE_TOOLCHAIN_FILE=/usr/local/share/vcpkg/scripts/buildsystems/vcpkg.cmake -DVCPKG_TARGET_TRIPLET=${{matrix.platform.vcpkg_toolkit}} -DANDROID=1 -DANDROID_PLATFORM=23 -DANDROID_ABI=${{matrix.platform.ndk_abi}} -DVCPKG_CHAINLOAD_TOOLCHAIN_FILE=${{steps.setup_ndk.outputs.ndk-path}}/build/cmake/android.toolchain.cmake -DSDLIMAGE_JPG_SHARED=OFF -DSDLIMAGE_PNG_SHARED=OFF -DTOMATO_MAIN_SO=ON -DTOMATO_TOX_AV=ON
      - name: Build (tomato)
        run: cmake --build ${{github.workspace}}/build --config ${{env.BUILD_TYPE}} -j 4 -t tomato
@ -164,7 +164,7 @@ jobs:
      #- uses: ilammy/setup-nasm@v1
      - name: Configure CMake
-       run: cmake -G Ninja -B ${{github.workspace}}/build -DCMAKE_BUILD_TYPE=${{env.BUILD_TYPE}} -DCMAKE_TOOLCHAIN_FILE=C:/vcpkg/scripts/buildsystems/vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows-static -DSDLIMAGE_VENDORED=ON -DSDLIMAGE_DEPS_SHARED=ON -DSDLIMAGE_JXL=OFF -DSDLIMAGE_AVIF=OFF -DPKG_CONFIG_EXECUTABLE=C:/vcpkg/installed/x64-windows/tools/pkgconf/pkgconf.exe
+       run: cmake -G Ninja -B ${{github.workspace}}/build -DCMAKE_BUILD_TYPE=${{env.BUILD_TYPE}} -DCMAKE_TOOLCHAIN_FILE=C:/vcpkg/scripts/buildsystems/vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows-static -DSDLIMAGE_VENDORED=ON -DSDLIMAGE_DEPS_SHARED=ON -DSDLIMAGE_JXL=OFF -DSDLIMAGE_AVIF=OFF -DPKG_CONFIG_EXECUTABLE=C:/vcpkg/installed/x64-windows/tools/pkgconf/pkgconf.exe -DTOMATO_TOX_AV=ON
      - name: Build
        run: cmake --build ${{github.workspace}}/build --config ${{env.BUILD_TYPE}} -t tomato
@ -229,7 +229,7 @@ jobs:
      #- uses: ilammy/setup-nasm@v1
      - name: Configure CMake
-       run: cmake -G Ninja -B ${{github.workspace}}/build -DCMAKE_BUILD_TYPE=${{env.BUILD_TYPE}} -DCMAKE_TOOLCHAIN_FILE=C:/vcpkg/scripts/buildsystems/vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows-static -DTOMATO_ASAN=ON -DCMAKE_MSVC_RUNTIME_LIBRARY=MultiThreaded -DSDLIMAGE_VENDORED=ON -DSDLIMAGE_DEPS_SHARED=ON -DSDLIMAGE_JXL=OFF -DSDLIMAGE_AVIF=OFF -DPKG_CONFIG_EXECUTABLE=C:/vcpkg/installed/x64-windows/tools/pkgconf/pkgconf.exe
+       run: cmake -G Ninja -B ${{github.workspace}}/build -DCMAKE_BUILD_TYPE=${{env.BUILD_TYPE}} -DCMAKE_TOOLCHAIN_FILE=C:/vcpkg/scripts/buildsystems/vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows-static -DTOMATO_ASAN=ON -DCMAKE_MSVC_RUNTIME_LIBRARY=MultiThreaded -DSDLIMAGE_VENDORED=ON -DSDLIMAGE_DEPS_SHARED=ON -DSDLIMAGE_JXL=OFF -DSDLIMAGE_AVIF=OFF -DPKG_CONFIG_EXECUTABLE=C:/vcpkg/installed/x64-windows/tools/pkgconf/pkgconf.exe -DTOMATO_TOX_AV=ON
      - name: Build
        run: cmake --build ${{github.workspace}}/build --config ${{env.BUILD_TYPE}} -j 4 -t tomato


@ -80,6 +80,7 @@
] ++ self.packages.${system}.default.dlopenBuildInputs;
cmakeFlags = [
"-DTOMATO_TOX_AV=ON"
"-DTOMATO_ASAN=OFF"
"-DCMAKE_BUILD_TYPE=RelWithDebInfo"
#"-DCMAKE_BUILD_TYPE=Debug"


@ -102,12 +102,34 @@ target_sources(tomato PUBLIC
./chat_gui4.hpp
./chat_gui4.cpp
./frame_streams/frame_stream2.hpp
./frame_streams/audio_stream2.hpp
./frame_streams/stream_manager.hpp
./frame_streams/stream_manager.cpp
./frame_streams/locked_frame_stream.hpp
./frame_streams/multi_source.hpp
./frame_streams/voip_model.hpp
./frame_streams/sdl/sdl_audio2_frame_stream2.hpp
./frame_streams/sdl/sdl_audio2_frame_stream2.cpp
./frame_streams/sdl/video.hpp
./stream_manager_ui.hpp
./stream_manager_ui.cpp
./debug_video_tap.hpp
./debug_video_tap.cpp
)
if (TOMATO_TOX_AV)
target_sources(tomato PUBLIC
./tox_av.hpp
./tox_av.cpp
./tox_av_voip_model.hpp
./tox_av_voip_model.cpp
)
target_compile_definitions(tomato PUBLIC TOMATO_TOX_AV)
@ -147,3 +169,18 @@ target_link_libraries(tomato PUBLIC
set_target_properties(tomato PROPERTIES POSITION_INDEPENDENT_CODE ON)
########################################
add_executable(test_frame_stream2_pop_reframer EXCLUDE_FROM_ALL
./frame_streams/frame_stream2.hpp
./frame_streams/audio_stream2.hpp
./frame_streams/locked_frame_stream.hpp
./frame_streams/multi_source.hpp
./frame_streams/test_pop_reframer.cpp
)
target_link_libraries(test_frame_stream2_pop_reframer
solanaceae_util
)


@ -111,56 +111,60 @@ void FileSelector::render(void) {
}
}
try {
// do sorting here
// TODO: cache the result (lol)
if (ImGuiTableSortSpecs* sorts_specs = ImGui::TableGetSortSpecs(); sorts_specs != nullptr && sorts_specs->SpecsCount >= 1) {
switch (static_cast<SortID>(sorts_specs->Specs->ColumnUserID)) {
break; case SortID::name:
if (sorts_specs->Specs->SortDirection == ImGuiSortDirection_Descending) {
std::sort(dirs.begin(), dirs.end(), [](const auto& a, const auto& b) -> bool {
return a.path() < b.path();
});
std::sort(files.begin(), files.end(), [](const auto& a, const auto& b) -> bool {
return a.path().filename() < b.path().filename();
});
} else {
std::sort(dirs.begin(), dirs.end(), [](const auto& a, const auto& b) -> bool {
return a.path() > b.path();
});
std::sort(files.begin(), files.end(), [](const auto& a, const auto& b) -> bool {
return a.path().filename() > b.path().filename();
});
}
break; case SortID::size:
if (sorts_specs->Specs->SortDirection == ImGuiSortDirection_Descending) {
// TODO: sort dirs?
std::sort(files.begin(), files.end(), [](const auto& a, const auto& b) -> bool {
return a.file_size() < b.file_size();
});
} else {
// TODO: sort dirs?
std::sort(files.begin(), files.end(), [](const auto& a, const auto& b) -> bool {
return a.file_size() > b.file_size();
});
}
break; case SortID::date:
if (sorts_specs->Specs->SortDirection == ImGuiSortDirection_Descending) {
std::sort(dirs.begin(), dirs.end(), [](const auto& a, const auto& b) -> bool {
return a.last_write_time() < b.last_write_time();
});
std::sort(files.begin(), files.end(), [](const auto& a, const auto& b) -> bool {
return a.last_write_time() < b.last_write_time();
});
} else {
std::sort(dirs.begin(), dirs.end(), [](const auto& a, const auto& b) -> bool {
return a.last_write_time() > b.last_write_time();
});
std::sort(files.begin(), files.end(), [](const auto& a, const auto& b) -> bool {
return a.last_write_time() > b.last_write_time();
});
}
break; default: ;
}
}
} catch (...) {
// we likely saw a file disapear
}
for (auto const& dir_entry : dirs) {


@ -9,9 +9,13 @@
#include <solanaceae/contact/components.hpp>
#include <solanaceae/util/utils.hpp>
#include "./frame_streams/voip_model.hpp"
// HACK: remove them
#include <solanaceae/tox_contacts/components.hpp>
#include <entt/entity/entity.hpp>
#include <imgui/imgui.h>
#include <imgui/misc/cpp/imgui_stdlib.h>
@ -30,6 +34,7 @@
#include <fstream>
#include <iomanip>
#include <sstream>
#include <string>
#include <variant>
namespace Components {
@ -257,6 +262,97 @@ float ChatGui4::render(float time_delta) {
if (ImGui::BeginChild(chat_label.c_str(), {0, 0}, ImGuiChildFlags_Border, ImGuiWindowFlags_MenuBar)) {
if (ImGui::BeginMenuBar()) {
// check if contact has voip model
// use activesessioncomp instead?
if (_cr.all_of<VoIPModelI*>(*_selected_contact)) {
if (ImGui::BeginMenu("VoIP")) {
auto* voip_model = _cr.get<VoIPModelI*>(*_selected_contact);
std::vector<ObjectHandle> contact_sessions;
std::vector<ObjectHandle> acceptable_sessions;
for (const auto& [ov, o_vm, sc] : _os.registry().view<VoIPModelI*, Components::VoIP::SessionContact>().each()) {
if (o_vm != voip_model) {
continue;
}
if (sc.c != *_selected_contact) {
continue;
}
auto o = _os.objectHandle(ov);
contact_sessions.push_back(o);
if (!o.all_of<Components::VoIP::Incoming>()) {
continue; // not incoming
}
// state is ringing/not yet accepted
const auto* session_state = o.try_get<Components::VoIP::SessionState>();
if (session_state == nullptr) {
continue;
}
if (session_state->state != Components::VoIP::SessionState::State::RINGING) {
continue;
}
acceptable_sessions.push_back(o);
}
static Components::VoIP::DefaultConfig g_default_connections{};
if (ImGui::BeginMenu("default connections")) {
ImGui::MenuItem("incoming audio", nullptr, &g_default_connections.incoming_audio);
ImGui::MenuItem("incoming video", nullptr, &g_default_connections.incoming_video);
ImGui::Separator();
ImGui::MenuItem("outgoing audio", nullptr, &g_default_connections.outgoing_audio);
ImGui::MenuItem("outgoing video", nullptr, &g_default_connections.outgoing_video);
ImGui::EndMenu();
}
if (acceptable_sessions.size() < 2) {
if (ImGui::MenuItem("accept call", nullptr, false, !acceptable_sessions.empty())) {
voip_model->accept(acceptable_sessions.front(), g_default_connections);
}
} else {
if (ImGui::BeginMenu("accept call", !acceptable_sessions.empty())) {
for (const auto o : acceptable_sessions) {
std::string label = "accept #";
label += std::to_string(entt::to_integral(entt::to_entity(o.entity())));
if (ImGui::MenuItem(label.c_str())) {
voip_model->accept(o, g_default_connections);
}
}
ImGui::EndMenu();
}
}
// TODO: disable if already in call?
if (ImGui::Button(" call ")) {
voip_model->enter(*_selected_contact, g_default_connections);
}
if (contact_sessions.size() < 2) {
if (ImGui::MenuItem("leave/reject call", nullptr, false, !contact_sessions.empty())) {
voip_model->leave(contact_sessions.front());
}
} else {
if (ImGui::BeginMenu("leave/reject call")) {
// list
for (const auto o : contact_sessions) {
std::string label = "end #";
label += std::to_string(entt::to_integral(entt::to_entity(o.entity())));
if (ImGui::MenuItem(label.c_str())) {
voip_model->leave(o);
}
}
ImGui::EndMenu();
}
}
ImGui::EndMenu();
}
}
if (ImGui::BeginMenu("debug")) {
ImGui::Checkbox("show extra info", &_show_chat_extra_info);
ImGui::Checkbox("show avatar transfers", &_show_chat_avatar_tf);

src/debug_video_tap.cpp (new file, 296 lines)

@ -0,0 +1,296 @@
#include "./debug_video_tap.hpp"
#include <solanaceae/object_store/object_store.hpp>
#include <entt/entity/entity.hpp>
#include <SDL3/SDL.h>
#include <imgui/imgui.h>
#include "./frame_streams/sdl/video.hpp"
#include "./frame_streams/frame_stream2.hpp"
#include <string>
#include <memory>
#include <mutex>
#include <deque>
#include <thread>
#include <chrono>
#include <atomic>
#include <iostream>
// fwd
namespace Message {
uint64_t getTimeMS(void);
}
// threadsafe queue frame stream
// protected by a simple mutex lock
template<typename FrameType>
struct LockedFrameStream2 : public FrameStream2I<FrameType> {
std::mutex _lock;
std::deque<FrameType> _frames;
~LockedFrameStream2(void) {}
int32_t size(void) { return -1; }
std::optional<FrameType> pop(void) {
std::lock_guard lg{_lock};
if (_frames.empty()) {
return std::nullopt;
}
FrameType new_frame = std::move(_frames.front());
_frames.pop_front();
return std::move(new_frame);
}
bool push(const FrameType& value) {
std::lock_guard lg{_lock};
_frames.push_back(value);
return true;
}
};
struct DebugVideoTapSink : public FrameStream2SinkI<SDLVideoFrame> {
TextureUploaderI& _tu;
uint32_t _id_counter {0};
struct Writer {
struct View {
uint32_t _id {0}; // for stable imgui ids
uint64_t _tex {0};
uint32_t _tex_w {0};
uint32_t _tex_h {0};
bool _mirror {false}; // flip horizontally
uint64_t _v_last_ts {0}; // us
float _v_interval_avg {0.f}; // s
} view;
std::shared_ptr<LockedFrameStream2<SDLVideoFrame>> stream;
};
std::vector<Writer> _writers;
DebugVideoTapSink(TextureUploaderI& tu) : _tu(tu) {}
~DebugVideoTapSink(void) {}
// sink
std::shared_ptr<FrameStream2I<SDLVideoFrame>> subscribe(void) override {
_writers.emplace_back(Writer{
Writer::View{_id_counter++},
std::make_shared<LockedFrameStream2<SDLVideoFrame>>()
});
return _writers.back().stream;
}
bool unsubscribe(const std::shared_ptr<FrameStream2I<SDLVideoFrame>>& sub) override {
if (!sub || _writers.empty()) {
// nah
return false;
}
for (auto it = _writers.cbegin(); it != _writers.cend(); it++) {
if (it->stream == sub) {
_tu.destroy(it->view._tex);
_writers.erase(it);
return true;
}
}
// what
return false;
}
};
struct DebugVideoTestSource : public FrameStream2SourceI<SDLVideoFrame> {
std::vector<std::shared_ptr<LockedFrameStream2<SDLVideoFrame>>> _readers;
std::atomic_bool _stop {false};
std::thread _thread;
DebugVideoTestSource(void) {
std::cout << "DVTS: starting new test video source\n";
_thread = std::thread([this](void) {
while (!_stop) {
if (!_readers.empty()) {
auto* surf = SDL_CreateSurface(960, 720, SDL_PIXELFORMAT_ARGB32);
// color
static auto start_time = Message::getTimeMS();
const float time = (Message::getTimeMS() - start_time)/1000.f;
SDL_ClearSurface(surf, std::sin(time), std::cos(time), 0.5f, 1.f);
SDLVideoFrame frame{ // non-owning
Message::getTimeMS()*1000,
surf,
};
for (auto& stream : _readers) {
stream->push(frame); // copy
}
SDL_DestroySurface(surf);
}
std::this_thread::sleep_for(std::chrono::milliseconds(50));
}
});
}
~DebugVideoTestSource(void) {
_stop = true;
_thread.join();
}
std::shared_ptr<FrameStream2I<SDLVideoFrame>> subscribe(void) override {
return _readers.emplace_back(std::make_shared<LockedFrameStream2<SDLVideoFrame>>());
}
bool unsubscribe(const std::shared_ptr<FrameStream2I<SDLVideoFrame>>& sub) override {
for (auto it = _readers.cbegin(); it != _readers.cend(); it++) {
if (it->get() == sub.get()) {
_readers.erase(it);
return true;
}
}
return false;
}
};
DebugVideoTap::DebugVideoTap(ObjectStore2& os, StreamManager& sm, TextureUploaderI& tu) : _os(os), _sm(sm), _tu(tu) {
// post self as video sink
_tap = {_os.registry(), _os.registry().create()};
try {
auto dvts = std::make_unique<DebugVideoTapSink>(_tu);
_tap.emplace<DebugVideoTapSink*>(dvts.get()); // to get our data back
_tap.emplace<Components::FrameStream2Sink<SDLVideoFrame>>(
std::move(dvts)
);
_tap.emplace<Components::StreamSink>(Components::StreamSink::create<SDLVideoFrame>("DebugVideoTap"));
_tap.emplace<Components::TagDefaultTarget>();
_os.throwEventConstruct(_tap);
} catch (...) {
_os.registry().destroy(_tap);
}
_src = {_os.registry(), _os.registry().create()};
try {
auto dvts = std::make_unique<DebugVideoTestSource>();
_src.emplace<DebugVideoTestSource*>(dvts.get());
_src.emplace<Components::FrameStream2Source<SDLVideoFrame>>(
std::move(dvts)
);
_src.emplace<Components::StreamSource>(Components::StreamSource::create<SDLVideoFrame>("DebugVideoTest"));
_os.throwEventConstruct(_src);
} catch (...) {
_os.registry().destroy(_src);
}
}
DebugVideoTap::~DebugVideoTap(void) {
if (static_cast<bool>(_tap)) {
_os.registry().destroy(_tap);
}
if (static_cast<bool>(_src)) {
_os.registry().destroy(_src);
}
}
float DebugVideoTap::render(void) {
float min_interval {2.f};
auto& dvtsw = _tap.get<DebugVideoTapSink*>()->_writers;
for (auto& [view, stream] : dvtsw) {
std::string window_title {"DebugVideoTap #"};
window_title += std::to_string(view._id);
ImGui::SetNextWindowSize({250, 250}, ImGuiCond_Appearing);
if (ImGui::Begin(window_title.c_str())) {
while (auto new_frame_opt = stream->pop()) {
// timing
if (view._v_last_ts == 0) {
view._v_last_ts = new_frame_opt.value().timestampUS;
} else {
auto delta = int64_t(new_frame_opt.value().timestampUS) - int64_t(view._v_last_ts);
view._v_last_ts = new_frame_opt.value().timestampUS;
if (view._v_interval_avg == 0) {
view._v_interval_avg = delta/1'000'000.f;
} else {
const float r = 0.2f;
view._v_interval_avg = view._v_interval_avg * (1.f-r) + (delta/1'000'000.f) * r;
}
}
SDL_Surface* new_frame_surf = new_frame_opt.value().surface.get();
SDL_Surface* converted_surf = new_frame_surf;
if (new_frame_surf->format != SDL_PIXELFORMAT_RGBA32) {
// we need to convert
//std::cerr << "DVT: need to convert\n";
converted_surf = SDL_ConvertSurfaceAndColorspace(new_frame_surf, SDL_PIXELFORMAT_RGBA32, nullptr, SDL_COLORSPACE_RGB_DEFAULT, 0);
assert(converted_surf->format == SDL_PIXELFORMAT_RGBA32);
}
SDL_LockSurface(converted_surf);
if (view._tex == 0 || (int)view._tex_w != converted_surf->w || (int)view._tex_h != converted_surf->h) {
_tu.destroy(view._tex);
view._tex = _tu.uploadRGBA(
static_cast<const uint8_t*>(converted_surf->pixels),
converted_surf->w,
converted_surf->h,
TextureUploaderI::LINEAR,
TextureUploaderI::STREAMING
);
view._tex_w = converted_surf->w;
view._tex_h = converted_surf->h;
} else {
_tu.updateRGBA(view._tex, static_cast<const uint8_t*>(converted_surf->pixels), converted_surf->w * converted_surf->h * 4);
}
SDL_UnlockSurface(converted_surf);
if (new_frame_surf != converted_surf) {
// clean up temp
SDL_DestroySurface(converted_surf);
}
}
ImGui::Checkbox("mirror", &view._mirror);
// img here
if (view._tex != 0) {
ImGui::SameLine();
ImGui::Text("moving avg interval: %f", view._v_interval_avg);
const float img_w = ImGui::GetContentRegionAvail().x;
ImGui::Image(
reinterpret_cast<ImTextureID>(view._tex),
ImVec2{img_w, img_w * float(view._tex_h)/view._tex_w},
ImVec2{view._mirror?1.f:0.f, 0},
ImVec2{view._mirror?0.f:1.f, 1}
);
}
}
ImGui::End();
}
return min_interval;
}

src/debug_video_tap.hpp (new file, 23 lines)

@ -0,0 +1,23 @@
#pragma once
#include <solanaceae/object_store/fwd.hpp>
#include "./frame_streams/stream_manager.hpp"
#include "./texture_uploader.hpp"
// provides a sink and a small window displaying a SDLVideoFrame
// HACK: provides a test video source
class DebugVideoTap {
ObjectStore2& _os;
StreamManager& _sm;
TextureUploaderI& _tu;
ObjectHandle _tap;
ObjectHandle _src;
public:
DebugVideoTap(ObjectStore2& os, StreamManager& sm, TextureUploaderI& tu);
~DebugVideoTap(void);
float render(void);
};


@ -0,0 +1,39 @@
#pragma once
#include "./frame_stream2.hpp"
#include <solanaceae/util/span.hpp>
#include <cstdint>
#include <variant>
#include <vector>
// raw audio
// channels make samples interleaved,
// planar channels are not supported
// s16 only stopgap audio frame (simplified)
struct AudioFrame2 {
// samples per second
uint32_t sample_rate {48'000};
// only >0 is valid
size_t channels {0};
std::variant<
std::vector<int16_t>, // S16, platform endianess
Span<int16_t> // non owning variant, for direct consumption
> buffer;
// helpers
Span<int16_t> getSpan(void) const {
if (std::holds_alternative<std::vector<int16_t>>(buffer)) {
return Span<int16_t>{std::get<std::vector<int16_t>>(buffer)};
} else {
return std::get<Span<int16_t>>(buffer);
}
return {};
}
};
using AudioFrame2Stream2I = FrameStream2I<AudioFrame2>;
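
Editor's aside, not part of the changeset: the reframer and the SDL wrappers below all rely on the same interleaved s16 arithmetic, samples = sample_rate * channels * duration_ms / 1000. A minimal sketch of that relationship (helper names are made up for illustration):

#include <cstddef>
#include <cstdint>

// hypothetical helpers, only to illustrate the buffer size math
constexpr size_t samplesForDuration(uint32_t sample_rate, size_t channels, uint32_t length_ms) {
	return size_t(sample_rate) * channels * length_ms / 1000;
}

constexpr uint32_t durationMSForSamples(uint32_t sample_rate, size_t channels, size_t samples) {
	return uint32_t(samples * 1000 / (size_t(sample_rate) * channels));
}

static_assert(samplesForDuration(48'000, 1, 20) == 960);   // 20ms mono at 48kHz
static_assert(durationMSForSamples(48'000, 2, 1920) == 20); // 20ms stereo at 48kHz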


@ -0,0 +1,103 @@
#pragma once
#include "./audio_stream2.hpp"
// reframes audio frames to a specified size in ms
// TODO: use absolute sample count instead??
template<typename RealAudioStream>
struct AudioStreamPopReFramer : public FrameStream2I<AudioFrame2> {
uint32_t _frame_length_ms {20};
// gotta be careful of the multithreaded nature
// and false(true) sharing
uint64_t _pad0{};
RealAudioStream _stream;
uint64_t _pad1{};
// dequeue?
std::vector<int16_t> _buffer;
uint32_t _sample_rate {48'000};
size_t _channels {0};
AudioStreamPopReFramer(uint32_t frame_length_ms = 20)
: _frame_length_ms(frame_length_ms) {
}
AudioStreamPopReFramer(uint32_t frame_length_ms, FrameStream2I<AudioFrame2>&& stream)
: _frame_length_ms(frame_length_ms), _stream(std::move(stream)) {
}
~AudioStreamPopReFramer(void) {}
size_t getDesiredSize(void) const {
return _frame_length_ms * _sample_rate * _channels / 1000;
}
int32_t size(void) override { return -1; }
std::optional<AudioFrame2> pop(void) override {
do {
auto new_in = _stream.pop();
if (new_in.has_value()) {
auto& new_value = new_in.value();
// changed
if (_sample_rate != new_value.sample_rate || _channels != new_value.channels) {
//if (_channels != 0) {
// std::cerr << "ReFramer warning: reconfiguring, dropping buffer\n";
//}
_sample_rate = new_value.sample_rate;
_channels = new_value.channels;
// buffer does not exist or config changed and we discard
_buffer = {};
}
//std::cout << "new incoming frame is " << new_value.getSpan().size/new_value.channels*1000/new_value.sample_rate << "ms\n";
auto new_span = new_value.getSpan();
if (_buffer.empty()) {
_buffer = {new_span.cbegin(), new_span.cend()};
} else {
_buffer.insert(_buffer.cend(), new_span.cbegin(), new_span.cend());
}
} else if (_buffer.empty()) {
// first pop might result in invalid state
return std::nullopt;
} else {
// inner stream pop did not give a new value
break; // out of loop
}
} while (_buffer.size() < getDesiredSize());
const auto desired_size = getDesiredSize();
// > threshold?
if (_buffer.size() < desired_size) {
return std::nullopt;
}
// copy data
std::vector<int16_t> return_buffer(_buffer.cbegin(), _buffer.cbegin()+desired_size);
// now crop buffer (meh)
// move data from back to front
_buffer.erase(_buffer.cbegin(), _buffer.cbegin() + desired_size);
return AudioFrame2{
_sample_rate,
_channels,
std::move(return_buffer),
};
}
bool push(const AudioFrame2& value) override {
// might be worth it to instead do the work on push
//assert(false && "push reframing not implemented");
// passthrough
return _stream.push(value);
}
};
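
Editor's note, not part of the diff: a minimal usage sketch of the reframer, mirroring the test_pop_reframer.cpp at the end of this changeset. Frames of arbitrary size go in via push(), fixed-length frames come out of pop() once enough samples have accumulated:

#include "./audio_stream_pop_reframer.hpp"
#include "./locked_frame_stream.hpp"

#include <cassert>
#include <vector>

void reframer_usage_sketch(void) {
	// wrap a locked queue stream, ask for 20ms output frames
	AudioStreamPopReFramer<LockedFrameStream2<AudioFrame2>> stream{20};

	// 10ms of mono 48kHz silence (480 samples)
	AudioFrame2 in{48'000, 1, std::vector<int16_t>(480, 0)};

	stream.push(in); // push() is a passthrough into the wrapped stream
	assert(!stream.pop().has_value()); // only 10ms buffered, not enough yet

	stream.push(in);
	auto out = stream.pop(); // now a full 20ms frame
	assert(out.has_value() && out.value().getSpan().size == 960);
}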


@ -0,0 +1,47 @@
#pragma once
#include <cstdint>
#include <memory>
#include <optional>
#include <vector>
// Frames often consist of:
// - seq id // incremental sequential id, gaps in ids can be used to detect loss
// - or timestamp
// - data // the frame data
// eg:
//struct ExampleFrame {
//int64_t seq_id {0};
//std::vector<uint8_t> data;
//};
template<typename FrameType>
struct FrameStream2I {
virtual ~FrameStream2I(void) {}
// get number of available frames
// returns -1 if unknown
[[nodiscard]] virtual int32_t size(void) = 0;
// get next frame
// data sharing? -> no, data is copied for each fsr, if concurency supported
[[nodiscard]] virtual std::optional<FrameType> pop(void) = 0;
// returns true if there are readers (or we dont know)
virtual bool push(const FrameType& value) = 0;
};
template<typename FrameType>
struct FrameStream2SourceI {
virtual ~FrameStream2SourceI(void) {}
[[nodiscard]] virtual std::shared_ptr<FrameStream2I<FrameType>> subscribe(void) = 0;
virtual bool unsubscribe(const std::shared_ptr<FrameStream2I<FrameType>>& sub) = 0;
};
template<typename FrameType>
struct FrameStream2SinkI {
virtual ~FrameStream2SinkI(void) {}
[[nodiscard]] virtual std::shared_ptr<FrameStream2I<FrameType>> subscribe(void) = 0;
virtual bool unsubscribe(const std::shared_ptr<FrameStream2I<FrameType>>& sub) = 0;
};
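
Editor's sketch, not part of the changeset: the smallest useful implementation of the pull-based FrameStream2I interface, a single-slot stream for a hypothetical CounterFrame type (both names are made up, following the ExampleFrame comment above):

#include "./frame_stream2.hpp"

#include <cstdint>

struct CounterFrame {
	int64_t seq_id {0};
};

// keeps only the most recent frame; push() overwrites, pop() consumes
struct SingleSlotStream final : public FrameStream2I<CounterFrame> {
	std::optional<CounterFrame> _slot;

	int32_t size(void) override { return _slot.has_value() ? 1 : 0; }

	std::optional<CounterFrame> pop(void) override {
		auto v = std::move(_slot);
		_slot.reset();
		return v;
	}

	bool push(const CounterFrame& value) override {
		_slot = value;
		return true; // we always "have a reader"
	}
};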


@ -0,0 +1,46 @@
#pragma once
#include "./frame_stream2.hpp"
#include <mutex>
#include <deque>
// threadsafe queue frame stream
// protected by a simple mutex lock
// prefer lockless queue implementations, when available
template<typename FrameType>
struct LockedFrameStream2 : public FrameStream2I<FrameType> {
std::mutex _lock;
std::deque<FrameType> _frames;
~LockedFrameStream2(void) {}
int32_t size(void) { return -1; }
std::optional<FrameType> pop(void) {
std::lock_guard lg{_lock};
if (_frames.empty()) {
return std::nullopt;
}
FrameType new_frame = std::move(_frames.front());
_frames.pop_front();
return std::move(new_frame);
}
bool push(const FrameType& value) {
std::lock_guard lg{_lock};
if (_frames.size() > 1024) {
return false; // hard limit
}
_frames.push_back(value);
return true;
}
};


@ -0,0 +1,62 @@
#pragma once
#include "./locked_frame_stream.hpp"
#include <cassert>
// implements a stream that pushes to all sub streams
template<typename FrameType, typename SubStreamType = LockedFrameStream2<FrameType>>
struct FrameStream2MultiSource : public FrameStream2SourceI<FrameType>, public FrameStream2I<FrameType> {
using sub_stream_type_t = SubStreamType;
// pointer stability
std::vector<std::shared_ptr<SubStreamType>> _sub_streams;
std::mutex _sub_stream_lock; // accessing the _sub_streams array needs to be exclusive
// a simple lock here is ok, since this tends to be a rare operation,
// except for the push, which is always on the same thread
virtual ~FrameStream2MultiSource(void) {}
// TODO: forward args instead
std::shared_ptr<FrameStream2I<FrameType>> subscribe(void) override {
std::lock_guard lg{_sub_stream_lock};
return _sub_streams.emplace_back(std::make_unique<SubStreamType>());
}
bool unsubscribe(const std::shared_ptr<FrameStream2I<FrameType>>& sub) override {
std::lock_guard lg{_sub_stream_lock};
for (auto it = _sub_streams.begin(); it != _sub_streams.end(); it++) {
if (*it == sub) {
_sub_streams.erase(it);
return true;
}
}
return false; // ?
}
// stream interface
int32_t size(void) override {
// TODO: return something sensible?
return -1;
}
std::optional<FrameType> pop(void) override {
// nope
assert(false && "this logic is very frame type specific, provide an impl");
return std::nullopt;
}
// returns true if there are readers
bool push(const FrameType& value) override {
std::lock_guard lg{_sub_stream_lock};
bool have_readers{false};
for (auto& it : _sub_streams) {
[[maybe_unused]] auto _ = it->push(value);
have_readers = true; // even if queue full, we still continue believing in them
// maybe consider push return value?
}
return have_readers;
}
};
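
Editor's sketch, not part of the changeset: the fan-out behaviour of FrameStream2MultiSource, assuming the AudioFrame2 type from audio_stream2.hpp. Every subscriber receives its own copy of each pushed frame:

#include "./multi_source.hpp"
#include "./audio_stream2.hpp"

#include <cassert>
#include <vector>

void multi_source_sketch(void) {
	FrameStream2MultiSource<AudioFrame2> source;

	auto reader_a = source.subscribe();
	auto reader_b = source.subscribe();

	AudioFrame2 frame{48'000, 1, std::vector<int16_t>(960, 0)};
	source.push(frame); // every sub stream gets its own copy

	assert(reader_a->pop().has_value());
	assert(reader_b->pop().has_value());

	source.unsubscribe(reader_a); // reader_b keeps receiving future frames
}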


@ -0,0 +1,279 @@
#include "./sdl_audio2_frame_stream2.hpp"
#include <cassert>
#include <iostream>
#include <optional>
#include "../audio_stream_pop_reframer.hpp"
// "thin" wrapper around sdl audio streams
// we dont needs to get fance, as they already provide everything we need
struct SDLAudio2StreamReader : public AudioFrame2Stream2I {
std::unique_ptr<SDL_AudioStream, decltype(&SDL_DestroyAudioStream)> _stream;
uint32_t _sample_rate {48'000};
size_t _channels {0};
// buffer gets reused!
std::vector<int16_t> _buffer;
SDLAudio2StreamReader(void) : _stream(nullptr, nullptr) {}
SDLAudio2StreamReader(SDLAudio2StreamReader&& other) :
_stream(std::move(other._stream)),
_sample_rate(other._sample_rate),
_channels(other._channels)
{
const size_t buffer_size {960*_channels};
_buffer.resize(buffer_size);
}
~SDLAudio2StreamReader(void) {
if (_stream) {
SDL_UnbindAudioStream(_stream.get());
}
}
int32_t size(void) override {
//assert(_stream);
// returns bytes
//SDL_GetAudioStreamAvailable(_stream.get());
return -1;
}
std::optional<AudioFrame2> pop(void) override {
assert(_stream);
if (!_stream) {
return std::nullopt;
}
const size_t buffer_size {960*_channels};
_buffer.resize(buffer_size); // noop?
const auto read_bytes = SDL_GetAudioStreamData(
_stream.get(),
_buffer.data(),
_buffer.size()*sizeof(int16_t)
);
// no new frame yet, or error
if (read_bytes <= 0) {
return std::nullopt;
}
return AudioFrame2{
_sample_rate, _channels,
Span<int16_t>(_buffer.data(), read_bytes/sizeof(int16_t)),
};
}
bool push(const AudioFrame2&) override {
// TODO: make universal sdl stream wrapper (combine with SDLAudioOutputDeviceDefaultInstance)
assert(false && "read only");
return false;
}
};
SDLAudio2InputDevice::SDLAudio2InputDevice(void) : SDLAudio2InputDevice(SDL_AUDIO_DEVICE_DEFAULT_RECORDING) {
}
SDLAudio2InputDevice::SDLAudio2InputDevice(SDL_AudioDeviceID conf_device_id) : _configured_device_id(conf_device_id) {
if (_configured_device_id == 0) {
// TODO: proper error handling
throw int(1);
}
}
SDLAudio2InputDevice::~SDLAudio2InputDevice(void) {
_streams.clear();
if (_virtual_device_id != 0) {
SDL_CloseAudioDevice(_virtual_device_id);
_virtual_device_id = 0;
}
}
std::shared_ptr<FrameStream2I<AudioFrame2>> SDLAudio2InputDevice::subscribe(void) {
if (_virtual_device_id == 0) {
// first stream, open device
// this spec is more like a hint to the hardware
SDL_AudioSpec spec {
SDL_AUDIO_S16,
1, // TODO: conf
48'000,
};
_virtual_device_id = SDL_OpenAudioDevice(_configured_device_id, &spec);
}
if (_virtual_device_id == 0) {
std::cerr << "SDLAID error: failed opening device " << _configured_device_id << "\n";
return nullptr;
}
SDL_AudioSpec spec {
SDL_AUDIO_S16, // required, as AudioFrame2 only supports s16
1, // TODO: conf
48'000,
};
SDL_AudioSpec device_spec {
SDL_AUDIO_S16,
1, // TODO: conf
48'000,
};
// TODO: error check
SDL_GetAudioDeviceFormat(_virtual_device_id, &device_spec, nullptr);
// error check
auto* sdl_stream = SDL_CreateAudioStream(&device_spec, &spec);
// error check
SDL_BindAudioStream(_virtual_device_id, sdl_stream);
//auto new_stream = std::make_shared<SDLAudio2StreamReader>();
//// TODO: move to ctr
//new_stream->_stream = {sdl_stream, &SDL_DestroyAudioStream};
//new_stream->_sample_rate = spec.freq;
//new_stream->_channels = spec.channels;
auto new_stream = std::make_shared<AudioStreamPopReFramer<SDLAudio2StreamReader>>();
new_stream->_stream._stream = {sdl_stream, &SDL_DestroyAudioStream};
new_stream->_stream._sample_rate = spec.freq;
new_stream->_stream._channels = spec.channels;
new_stream->_frame_length_ms = 5; // WHY DOES THIS FIX MY ISSUE !!!
_streams.emplace_back(new_stream);
return new_stream;
}
bool SDLAudio2InputDevice::unsubscribe(const std::shared_ptr<FrameStream2I<AudioFrame2>>& sub) {
for (auto it = _streams.cbegin(); it != _streams.cend(); it++) {
if (*it == sub) {
_streams.erase(it);
if (_streams.empty()) {
// last stream, close
// TODO: make sure no shared ptr still exists???
SDL_CloseAudioDevice(_virtual_device_id);
std::cout << "SDLAID: closing device " << _virtual_device_id << " (" << _configured_device_id << ")\n";
_virtual_device_id = 0;
}
return true;
}
}
return false;
}
// does not need to be visible in the header
struct SDLAudio2OutputDeviceDefaultInstance : public AudioFrame2Stream2I {
std::unique_ptr<SDL_AudioStream, decltype(&SDL_DestroyAudioStream)> _stream;
uint32_t _last_sample_rate {48'000};
size_t _last_channels {0};
// TODO: audio device
SDLAudio2OutputDeviceDefaultInstance(void);
SDLAudio2OutputDeviceDefaultInstance(SDLAudio2OutputDeviceDefaultInstance&& other);
~SDLAudio2OutputDeviceDefaultInstance(void);
int32_t size(void) override;
std::optional<AudioFrame2> pop(void) override;
bool push(const AudioFrame2& value) override;
};
SDLAudio2OutputDeviceDefaultInstance::SDLAudio2OutputDeviceDefaultInstance(void) : _stream(nullptr, nullptr) {
}
SDLAudio2OutputDeviceDefaultInstance::SDLAudio2OutputDeviceDefaultInstance(SDLAudio2OutputDeviceDefaultInstance&& other) : _stream(std::move(other._stream)) {
}
SDLAudio2OutputDeviceDefaultInstance::~SDLAudio2OutputDeviceDefaultInstance(void) {
}
int32_t SDLAudio2OutputDeviceDefaultInstance::size(void) {
return -1;
}
std::optional<AudioFrame2> SDLAudio2OutputDeviceDefaultInstance::pop(void) {
assert(false);
// this is an output device, there is no data to pop
return std::nullopt;
}
bool SDLAudio2OutputDeviceDefaultInstance::push(const AudioFrame2& value) {
if (!static_cast<bool>(_stream)) {
return false;
}
// verify here the fame has the same channel count and sample freq
// if something changed, we need to either use a temporary stream, just for conversion, or reopen _stream with the new params
// because of data temporality, the second options looks like a better candidate
if (
value.sample_rate != _last_sample_rate ||
value.channels != _last_channels
) {
const SDL_AudioSpec spec = {
static_cast<SDL_AudioFormat>(SDL_AUDIO_S16),
static_cast<int>(value.channels),
static_cast<int>(value.sample_rate)
};
SDL_SetAudioStreamFormat(_stream.get(), &spec, nullptr);
std::cerr << "SDLAOD: audio format changed\n";
}
auto data = value.getSpan();
if (data.size == 0) {
std::cerr << "empty audio frame??\n";
}
if (!SDL_PutAudioStreamData(_stream.get(), data.ptr, data.size * sizeof(int16_t))) {
std::cerr << "put data error\n";
return false; // return true?
}
_last_sample_rate = value.sample_rate;
_last_channels = value.channels;
return true;
}
SDLAudio2OutputDeviceDefaultSink::~SDLAudio2OutputDeviceDefaultSink(void) {
// TODO: pause and close device?
}
std::shared_ptr<FrameStream2I<AudioFrame2>> SDLAudio2OutputDeviceDefaultSink::subscribe(void) {
auto new_instance = std::make_shared<SDLAudio2OutputDeviceDefaultInstance>();
constexpr SDL_AudioSpec spec = { SDL_AUDIO_S16, 2, 48000 };
new_instance->_stream = {
SDL_OpenAudioDeviceStream(SDL_AUDIO_DEVICE_DEFAULT_PLAYBACK, &spec, nullptr, nullptr),
&SDL_DestroyAudioStream
};
new_instance->_last_sample_rate = spec.freq;
new_instance->_last_channels = spec.channels;
if (!static_cast<bool>(new_instance->_stream)) {
std::cerr << "SDL open audio device failed!\n";
return nullptr;
}
const auto audio_device_id = SDL_GetAudioStreamDevice(new_instance->_stream.get());
SDL_ResumeAudioDevice(audio_device_id);
return new_instance;
}
bool SDLAudio2OutputDeviceDefaultSink::unsubscribe(const std::shared_ptr<FrameStream2I<AudioFrame2>>& sub) {
// TODO: i think we should keep track of them
if (!sub) {
return false;
}
return true;
}


@ -0,0 +1,43 @@
#pragma once
#include "../frame_stream2.hpp"
#include "../audio_stream2.hpp"
#include <SDL3/SDL.h>
#include <cstdint>
#include <vector>
// we dont have to multicast ourself, because sdl streams and virtual devices already do this
// source
// opens device
// creates a sdl audio stream for each subscribed reader stream
struct SDLAudio2InputDevice : public FrameStream2SourceI<AudioFrame2> {
// held by instances
using sdl_stream_type = std::unique_ptr<SDL_AudioStream, decltype(&SDL_DestroyAudioStream)>;
SDL_AudioDeviceID _configured_device_id {0};
SDL_AudioDeviceID _virtual_device_id {0};
std::vector<std::shared_ptr<FrameStream2I<AudioFrame2>>> _streams;
SDLAudio2InputDevice(void);
SDLAudio2InputDevice(SDL_AudioDeviceID conf_device_id);
~SDLAudio2InputDevice(void);
std::shared_ptr<FrameStream2I<AudioFrame2>> subscribe(void) override;
bool unsubscribe(const std::shared_ptr<FrameStream2I<AudioFrame2>>& sub) override;
};
// sink
// constructs entirely new streams, since sdl handles sync and mixing for us (or should)
struct SDLAudio2OutputDeviceDefaultSink : public FrameStream2SinkI<AudioFrame2> {
// TODO: pause device?
~SDLAudio2OutputDeviceDefaultSink(void);
std::shared_ptr<FrameStream2I<AudioFrame2>> subscribe(void) override;
bool unsubscribe(const std::shared_ptr<FrameStream2I<AudioFrame2>>& sub) override;
};


@ -0,0 +1,41 @@
#pragma once
#include <SDL3/SDL.h>
#include <cstdint>
#include <memory>
// https://youtu.be/71Iw4Q74OaE
inline void nopSurfaceDestructor(SDL_Surface*) {}
// this is very sdl specific
// but allows us to autoconvert between formats (to a degree)
struct SDLVideoFrame {
// micro seconds (nano is way too much)
uint64_t timestampUS {0};
std::unique_ptr<SDL_Surface, decltype(&SDL_DestroySurface)> surface {nullptr, &SDL_DestroySurface};
// special non-owning constructor
SDLVideoFrame(
uint64_t ts,
SDL_Surface* surf
) {
timestampUS = ts;
surface = {surf, &nopSurfaceDestructor};
}
SDLVideoFrame(SDLVideoFrame&& other) = default;
// copy
SDLVideoFrame(const SDLVideoFrame& other) {
timestampUS = other.timestampUS;
if (static_cast<bool>(other.surface)) {
surface = {
SDL_DuplicateSurface(other.surface.get()),
&SDL_DestroySurface
};
}
}
SDLVideoFrame& operator=(const SDLVideoFrame& other) = delete;
};
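
Editor's sketch, not part of the changeset: the ownership pattern this copy constructor enables, as used by DebugVideoTestSource further up. Wrap a caller-owned surface non-owning, and let the copy constructor produce an owning duplicate:

#include "./video.hpp"

void video_frame_ownership_sketch(void) {
	// caller-owned surface
	SDL_Surface* surf = SDL_CreateSurface(640, 480, SDL_PIXELFORMAT_RGBA32);

	SDLVideoFrame frame{0, surf}; // non-owning view (nop destructor)

	SDLVideoFrame copy{frame}; // deep copy via SDL_DuplicateSurface, owning

	SDL_DestroySurface(surf); // frame's pointer now dangles, copy is unaffected
	// copy frees its duplicated surface when it goes out of scope
}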


@ -0,0 +1,206 @@
#include "./stream_manager.hpp"
StreamManager::Connection::Connection(
ObjectHandle src_,
ObjectHandle sink_,
std::unique_ptr<Data>&& data_,
std::function<void(Connection&)>&& pump_fn_,
std::function<void(Connection&)>&& unsubscribe_fn_,
bool on_main_thread_
) :
src(src_),
sink(sink_),
data(std::move(data_)),
pump_fn(std::move(pump_fn_)),
unsubscribe_fn(std::move(unsubscribe_fn_)),
on_main_thread(on_main_thread_)
{
if (!on_main_thread) {
// start thread
pump_thread = std::thread([this](void) {
while (!stop) {
pump_fn(*this);
std::this_thread::sleep_for(std::chrono::milliseconds(5));
}
finished = true;
});
}
}
StreamManager::StreamManager(ObjectStore2& os) : _os(os) {
_os.subscribe(this, ObjectStore_Event::object_construct);
//_os.subscribe(this, ObjectStore_Event::object_update);
_os.subscribe(this, ObjectStore_Event::object_destroy);
}
StreamManager::~StreamManager(void) {
// stop all connetions
for (const auto& con : _connections) {
con->stop = true;
if (!con->on_main_thread) {
con->pump_thread.join(); // we skip the finished check and wait
}
con->unsubscribe_fn(*con);
}
}
bool StreamManager::connect(Object src, Object sink, bool threaded) {
auto h_src = _os.objectHandle(src);
auto h_sink = _os.objectHandle(sink);
if (!static_cast<bool>(h_src) || !static_cast<bool>(h_sink)) {
// an object does not exist
return false;
}
// get src and sink comps
if (!h_src.all_of<Components::StreamSource>()) {
// src not stream source
return false;
}
if (!h_sink.all_of<Components::StreamSink>()) {
// sink not stream sink
return false;
}
const auto& ssrc = h_src.get<Components::StreamSource>();
const auto& ssink = h_sink.get<Components::StreamSink>();
// compare type
if (ssrc.frame_type_name != ssink.frame_type_name) {
return false;
}
// always fail in debug mode
assert(static_cast<bool>(ssrc.connect_fn));
if (!static_cast<bool>(ssrc.connect_fn)) {
return false;
}
// use connect fn from src
return ssrc.connect_fn(*this, src, sink, threaded);
}
bool StreamManager::disconnect(Object src, Object sink) {
auto res = std::find_if(
_connections.cbegin(), _connections.cend(),
[&](const auto& a) { return a->src == src && a->sink == sink; }
);
if (res == _connections.cend()) {
// not found
return false;
}
// do disconnect
(*res)->stop = true;
return true;
}
bool StreamManager::disconnectAll(Object o) {
bool succ {false};
for (const auto& con : _connections) {
if (con->src == o || con->sink == o) {
con->stop = true;
succ = true;
}
}
return succ;
}
// do we need the time delta?
float StreamManager::tick(float) {
// pump all mainthread connections
for (auto it = _connections.begin(); it != _connections.end();) {
auto& con = **it;
if (!static_cast<bool>(con.src) || !static_cast<bool>(con.sink)) {
// either side disappeard without disconnectAll
// TODO: warn/error log
con.stop = true;
}
if (con.on_main_thread) {
con.pump_fn(con);
}
if (con.stop && (con.finished || con.on_main_thread)) {
if (!con.on_main_thread) {
assert(con.pump_thread.joinable());
con.pump_thread.join();
}
con.unsubscribe_fn(con);
it = _connections.erase(it);
} else {
it++;
}
}
// return min over intervals instead
return 2.f; // TODO: 2sec makes mainthread connections unusable
}
bool StreamManager::onEvent(const ObjectStore::Events::ObjectConstruct& e) {
if (!e.e.any_of<Components::StreamSink, Components::StreamSource>()) {
return false;
}
// update default targets
if (e.e.all_of<Components::TagDefaultTarget>()) {
if (e.e.all_of<Components::StreamSource>()) {
_default_sources[e.e.get<Components::StreamSource>().frame_type_name] = e.e;
} else { // sink
_default_sinks[e.e.get<Components::StreamSink>().frame_type_name] = e.e;
}
}
// connect to default
// only ever do this on new objects
if (e.e.all_of<Components::TagConnectToDefault>()) {
if (e.e.all_of<Components::StreamSource>()) {
auto it_d_sink = _default_sinks.find(e.e.get<Components::StreamSource>().frame_type_name);
if (it_d_sink != _default_sinks.cend()) {
// TODO: threaded
connect(e.e, it_d_sink->second);
}
} else { // sink
auto it_d_src = _default_sources.find(e.e.get<Components::StreamSink>().frame_type_name);
if (it_d_src != _default_sources.cend()) {
// TODO: threaded
connect(it_d_src->second, e.e);
}
}
}
return false;
}
bool StreamManager::onEvent(const ObjectStore::Events::ObjectUpdate&) {
// what do we do here?
return false;
}
bool StreamManager::onEvent(const ObjectStore::Events::ObjectDestory& e) {
// typeless
for (auto it = _default_sources.cbegin(); it != _default_sources.cend();) {
if (it->second == e.e) {
it = _default_sources.erase(it);
} else {
it++;
}
}
for (auto it = _default_sinks.cbegin(); it != _default_sinks.cend();) {
if (it->second == e.e) {
it = _default_sinks.erase(it);
} else {
it++;
}
}
// TODO: destroy connections
// TODO: auto reconnect default following devices if another default exists
return false;
}


@ -0,0 +1,222 @@
#pragma once
#include <solanaceae/object_store/fwd.hpp>
#include <solanaceae/object_store/object_store.hpp>
#include <entt/core/type_info.hpp>
#include "./frame_stream2.hpp"
#include <unordered_map>
#include <vector>
#include <memory>
#include <algorithm>
#include <thread>
#include <chrono>
#include <atomic>
// fwd
class StreamManager;
namespace Components {
// mark a source or sink as the(a) default
struct TagDefaultTarget {};
// mark a source/sink as to be connected to a default sink/source
// of the same type
struct TagConnectToDefault {};
struct StreamSource {
std::string name;
std::string frame_type_name;
std::function<bool(StreamManager&, Object, Object, bool)> connect_fn;
template<typename FrameType>
static StreamSource create(const std::string& name);
};
struct StreamSink {
std::string name;
std::string frame_type_name;
template<typename FrameType>
static StreamSink create(const std::string& name);
};
template<typename FrameType>
using FrameStream2Source = std::unique_ptr<FrameStream2SourceI<FrameType>>;
template<typename FrameType>
using FrameStream2Sink = std::unique_ptr<FrameStream2SinkI<FrameType>>;
} // Components
class StreamManager : protected ObjectStoreEventI {
friend class StreamManagerUI; // TODO: make this go away
ObjectStore2& _os;
struct Connection {
ObjectHandle src;
ObjectHandle sink;
struct Data {
virtual ~Data(void) {}
};
std::unique_ptr<Data> data; // stores reader writer type erased
std::function<void(Connection&)> pump_fn; // TODO: make it return next interval?
std::function<void(Connection&)> unsubscribe_fn;
bool on_main_thread {true};
std::atomic_bool stop {false}; // disconnect
std::atomic_bool finished {false}; // disconnect
// pump thread
std::thread pump_thread;
// frame interval counters and estimates
// TODO
Connection(void) = default;
Connection(
ObjectHandle src_,
ObjectHandle sink_,
std::unique_ptr<Data>&& data_,
std::function<void(Connection&)>&& pump_fn_,
std::function<void(Connection&)>&& unsubscribe_fn_,
bool on_main_thread_ = true
);
};
std::vector<std::unique_ptr<Connection>> _connections;
std::unordered_map<std::string, Object> _default_sources;
std::unordered_map<std::string, Object> _default_sinks;
public:
StreamManager(ObjectStore2& os);
virtual ~StreamManager(void);
template<typename FrameType>
bool connect(Object src, Object sink, bool threaded = true);
bool connect(Object src, Object sink, bool threaded = true);
bool disconnect(Object src, Object sink);
bool disconnectAll(Object o);
// do we need the time delta?
float tick(float);
protected:
bool onEvent(const ObjectStore::Events::ObjectConstruct&) override;
bool onEvent(const ObjectStore::Events::ObjectUpdate&) override;
bool onEvent(const ObjectStore::Events::ObjectDestory&) override;
};
// template impls
namespace Components {
// we require the complete sm type here
template<typename FrameType>
StreamSource StreamSource::create(const std::string& name) {
return StreamSource{
name,
std::string{entt::type_name<FrameType>::value()},
+[](StreamManager& sm, Object src, Object sink, bool threaded) {
return sm.connect<FrameType>(src, sink, threaded);
},
};
}
template<typename FrameType>
StreamSink StreamSink::create(const std::string& name) {
return StreamSink{
name,
std::string{entt::type_name<FrameType>::value()},
};
}
} // Components
template<typename FrameType>
bool StreamManager::connect(Object src, Object sink, bool threaded) {
auto res = std::find_if(
_connections.cbegin(), _connections.cend(),
[&](const auto& a) { return a->src == src && a->sink == sink; }
);
if (res != _connections.cend()) {
// already exists
return false;
}
auto h_src = _os.objectHandle(src);
auto h_sink = _os.objectHandle(sink);
if (!static_cast<bool>(h_src) || !static_cast<bool>(h_sink)) {
// an object does not exist
return false;
}
if (!h_src.all_of<Components::FrameStream2Source<FrameType>>()) {
// src not stream source
return false;
}
if (!h_sink.all_of<Components::FrameStream2Sink<FrameType>>()) {
// sink not stream sink
return false;
}
auto& src_stream = h_src.get<Components::FrameStream2Source<FrameType>>();
auto& sink_stream = h_sink.get<Components::FrameStream2Sink<FrameType>>();
struct inlineData : public Connection::Data {
virtual ~inlineData(void) {}
std::shared_ptr<FrameStream2I<FrameType>> reader;
std::shared_ptr<FrameStream2I<FrameType>> writer;
};
auto our_data = std::make_unique<inlineData>();
our_data->reader = src_stream->subscribe();
if (!our_data->reader) {
return false;
}
our_data->writer = sink_stream->subscribe();
if (!our_data->writer) {
return false;
}
_connections.push_back(std::make_unique<Connection>(
h_src,
h_sink,
std::move(our_data),
[](Connection& con) -> void {
// more than one frame might be queued; drain up to 10 per pump to bound the work per call
for (size_t i = 0; i < 10; i++) {
auto new_frame_opt = static_cast<inlineData*>(con.data.get())->reader->pop();
// TODO: frame interval estimates
if (new_frame_opt.has_value()) {
static_cast<inlineData*>(con.data.get())->writer->push(new_frame_opt.value());
} else {
break;
}
}
},
[](Connection& con) -> void {
auto* src_stream_ptr = con.src.try_get<Components::FrameStream2Source<FrameType>>();
if (src_stream_ptr != nullptr) {
(*src_stream_ptr)->unsubscribe(static_cast<inlineData*>(con.data.get())->reader);
}
auto* sink_stream_ptr = con.sink.try_get<Components::FrameStream2Sink<FrameType>>();
if (sink_stream_ptr != nullptr) {
(*sink_stream_ptr)->unsubscribe(static_cast<inlineData*>(con.data.get())->writer);
}
},
!threaded
));
return true;
}

View File

@ -0,0 +1,155 @@
#include <iostream>
#include "./audio_stream_pop_reframer.hpp"
#include "./locked_frame_stream.hpp"
#include <cassert>
int main(void) {
{ // pump perfect
AudioStreamPopReFramer<LockedFrameStream2<AudioFrame2>> stream;
stream._frame_length_ms = 10;
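// 10ms at 48kHz mono -> 480 samples per frame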
AudioFrame2 f1 {
48'000,
1,
{},
};
f1.buffer = std::vector<int16_t>(
// perfect size
stream._frame_length_ms * f1.sample_rate * f1.channels / 1000,
0
);
{ // fill with sequential value
int16_t seq = 0;
for (auto& v : std::get<std::vector<int16_t>>(f1.buffer)) {
v = seq++;
}
}
stream.push(f1);
auto ret_opt = stream.pop();
assert(ret_opt);
auto& ret = ret_opt.value();
assert(ret.sample_rate == f1.sample_rate);
assert(ret.channels == f1.channels);
assert(ret.getSpan().size == f1.getSpan().size);
{
int16_t seq = 0;
for (const auto v : ret.getSpan()) {
assert(v == seq++);
}
}
}
{ // pump half
AudioStreamPopReFramer<LockedFrameStream2<AudioFrame2>> stream;
stream._frame_length_ms = 10;
AudioFrame2 f1 {
48'000,
1,
{},
};
f1.buffer = std::vector<int16_t>(
// half a frame
(stream._frame_length_ms * f1.sample_rate * f1.channels / 1000) / 2,
0
);
AudioFrame2 f2 {
48'000,
1,
{},
};
f2.buffer = std::vector<int16_t>(
// half a frame (second half)
(stream._frame_length_ms * f1.sample_rate * f1.channels / 1000) / 2,
0
);
{ // fill with sequential value
int16_t seq = 0;
for (auto& v : std::get<std::vector<int16_t>>(f1.buffer)) {
v = seq++;
}
for (auto& v : std::get<std::vector<int16_t>>(f2.buffer)) {
v = seq++;
}
}
stream.push(f1);
stream.push(f2);
// supposed to combine both
auto ret_opt = stream.pop();
assert(ret_opt);
auto& ret = ret_opt.value();
assert(ret.sample_rate == f1.sample_rate);
assert(ret.channels == f1.channels);
assert(ret.getSpan().size == stream._frame_length_ms * f1.sample_rate * f1.channels / 1000);
{
int16_t seq = 0;
for (const auto v : ret.getSpan()) {
assert(v == seq++);
}
}
}
{ // pump double
AudioStreamPopReFramer<LockedFrameStream2<AudioFrame2>> stream;
stream._frame_length_ms = 20;
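// 20ms at 48kHz stereo -> 1920 samples per frame; the buffer below holds two frames worth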
AudioFrame2 f1 {
48'000,
2,
{},
};
f1.buffer = std::vector<int16_t>(
// double size, two frames worth
(stream._frame_length_ms * f1.sample_rate * f1.channels / 1000) * 2,
0
);
{ // fill with sequential value
int16_t seq = 0;
for (auto& v : std::get<std::vector<int16_t>>(f1.buffer)) {
v = seq++;
}
}
stream.push(f1);
// pop 2x
int16_t seq = 0;
{
auto ret_opt = stream.pop();
assert(ret_opt);
auto& ret = ret_opt.value();
assert(ret.sample_rate == f1.sample_rate);
assert(ret.channels == f1.channels);
assert(ret.getSpan().size == stream._frame_length_ms * f1.sample_rate * f1.channels / 1000);
for (const auto v : ret.getSpan()) {
assert(v == seq++);
}
}
{
auto ret_opt = stream.pop();
assert(ret_opt);
auto& ret = ret_opt.value();
assert(ret.sample_rate == f1.sample_rate);
assert(ret.channels == f1.channels);
assert(ret.getSpan().size == stream._frame_length_ms * f1.sample_rate * f1.channels / 1000);
for (const auto v : ret.getSpan()) {
assert(v == seq++);
}
}
}
return 0;
}

View File

@ -0,0 +1,77 @@
#pragma once
#include <solanaceae/contact/contact_model3.hpp>
#include <solanaceae/object_store/fwd.hpp>
struct VoIPModelI;
namespace Components::VoIP {
struct TagVoIPSession {};
// getting called or invited by
struct Incoming {
Contact3 c{entt::null};
};
struct DefaultConfig {
bool incoming_audio {true};
bool incoming_video {true};
bool outgoing_audio {true};
bool outgoing_video {true};
};
// to talk to the model handling this session
//struct VoIPModel {
//VoIPModelI* ptr {nullptr};
//};
struct SessionState {
// ????
// incoming
// outgoing
enum class State {
RINGING,
CONNECTED,
} state;
};
struct SessionContact {
Contact3 c{entt::null};
};
struct StreamSources {
// list of all stream sources originating from this VoIP session
std::vector<Object> streams;
};
struct StreamSinks {
// list of all stream sinks going to this VoIP session
std::vector<Object> streams;
};
} // Components::VoIP
// TODO: events? piggyback on objects?
// stream model instead?? -> more generic than "just" audio and video?
// or specialized like this
// streams abstract type in a nice way
struct VoIPModelI {
virtual ~VoIPModelI(void) {}
// enters a call/voicechat/videocall ???
// - contact
// - default stream sources/sinks ?
// - enable a/v ? -> done on connecting to sources
// returns an object tying together the VoIP session
virtual ObjectHandle enter(const Contact3 c, const Components::VoIP::DefaultConfig& defaults = {true, true, true, true}) { (void)c,(void)defaults; return {}; }
// accept/join an invite to a session
virtual bool accept(ObjectHandle session, const Components::VoIP::DefaultConfig& defaults = {true, true, true, true}) { (void)session,(void)defaults; return false; }
// leaves a call
// - VoIP session object
virtual bool leave(ObjectHandle session) { (void)session; return false; }
};
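// rough call flow (illustrative only, the variable names are hypothetical):
//   auto session = voip_model.enter(contact, {true, true, true, true}); // outgoing call, session starts out RINGING
//   voip_model.accept(incoming_session);                                // join a session tagged with Components::VoIP::Incoming
//   voip_model.leave(session);                                          // end/cancel the call, the model tears down its streams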

View File

@ -28,6 +28,8 @@ int main(int argc, char** argv) {
	runSysCheck();
+	SDL_SetAppMetadata("tomato", "0.0.0-wip", nullptr);
#ifdef __ANDROID__
	// change current working dir to internal storage
	std::filesystem::current_path(SDL_GetAndroidInternalStoragePath());
@ -35,7 +37,7 @@ int main(int argc, char** argv) {
	// setup hints
#ifndef __ANDROID__
-	if (SDL_SetHint(SDL_HINT_VIDEO_ALLOW_SCREENSAVER, "1") != SDL_TRUE) {
+	if (!SDL_SetHint(SDL_HINT_VIDEO_ALLOW_SCREENSAVER, "1")) {
		std::cerr << "Failed to set '" << SDL_HINT_VIDEO_ALLOW_SCREENSAVER << "' to 1\n";
	}
#endif

View File

@ -5,6 +5,8 @@
#include <solanaceae/contact/components.hpp>
+#include "./frame_streams/sdl/sdl_audio2_frame_stream2.hpp"
#include <imgui/imgui.h>
#include <SDL3/SDL.h>
@ -19,16 +21,18 @@ MainScreen::MainScreen(SimpleConfigModel&& conf_, SDL_Renderer* renderer_, Theme
	rmm(cr),
	msnj{cr, {}, {}},
	mts(rmm),
+	sm(os),
	tc(save_path, save_password),
	tpi(tc.getTox()),
	ad(tc),
-#if TOMATO_TOX_AV
-	tav(tc.getTox()),
-#endif
	tcm(cr, tc, tc),
	tmm(rmm, cr, tcm, tc, tc),
	ttm(rmm, cr, tcm, tc, tc, os),
	tffom(cr, rmm, tcm, tc, tc),
+#if TOMATO_TOX_AV
+	tav(tc.getTox()),
+	tavvoip(os, tav, cr, tcm),
+#endif
	theme(theme_),
	mmil(rmm),
	tam(/*rmm, */ os, cr, conf),
@ -41,7 +45,9 @@ MainScreen::MainScreen(SimpleConfigModel&& conf_, SDL_Renderer* renderer_, Theme
	sw(conf),
	osui(os),
	tuiu(tc, conf),
-	tdch(tpi)
+	tdch(tpi),
+	smui(os, sm),
+	dvt(os, sm, sdlrtu)
{
	tel.subscribeAll(tc);
@ -75,7 +81,7 @@ MainScreen::MainScreen(SimpleConfigModel&& conf_, SDL_Renderer* renderer_, Theme
	g_provideInstance<ToxPrivateI>("ToxPrivateI", "host", &tpi);
	g_provideInstance<ToxEventProviderI>("ToxEventProviderI", "host", &tc);
#if TOMATO_TOX_AV
-	g_provideInstance<ToxAV>("ToxAV", "host", &tav);
+	g_provideInstance<ToxAVI>("ToxAVI", "host", &tav);
#endif
	g_provideInstance<ToxContactModel2>("ToxContactModel2", "host", &tcm);
@ -136,9 +142,46 @@ MainScreen::MainScreen(SimpleConfigModel&& conf_, SDL_Renderer* renderer_, Theme
	}
	conf.dump();
+	if (SDL_InitSubSystem(SDL_INIT_AUDIO)) {
+		// add system audio devices
+		{ // audio in
+			ObjectHandle asrc {os.registry(), os.registry().create()};
+			try {
+				asrc.emplace<Components::FrameStream2Source<AudioFrame2>>(
+					std::make_unique<SDLAudio2InputDevice>()
+				);
+				asrc.emplace<Components::StreamSource>(Components::StreamSource::create<AudioFrame2>("SDL Audio Default Recording Device"));
+				asrc.emplace<Components::TagDefaultTarget>();
+				os.throwEventConstruct(asrc);
+			} catch (...) {
+				os.registry().destroy(asrc);
+			}
+		}
+		{ // audio out
+			ObjectHandle asink {os.registry(), os.registry().create()};
+			try {
+				asink.emplace<Components::FrameStream2Sink<AudioFrame2>>(
+					std::make_unique<SDLAudio2OutputDeviceDefaultSink>()
+				);
+				asink.emplace<Components::StreamSink>(Components::StreamSink::create<AudioFrame2>("SDL Audio Default Playback Device"));
+				asink.emplace<Components::TagDefaultTarget>();
+				os.throwEventConstruct(asink);
+			} catch (...) {
+				os.registry().destroy(asink);
+			}
+		}
+	} else {
+		std::cerr << "MS warning: no sdl audio: " << SDL_GetError() << "\n";
+	}
}
MainScreen::~MainScreen(void) {
+	// TODO: quit sdl audio
}
bool MainScreen::handleEvent(SDL_Event& e) {
@ -260,6 +303,8 @@ Screen* MainScreen::render(float time_delta, bool&) {
	osui.render();
	tuiu.render(); // render
	tdch.render(); // render
+	smui.render();
+	const float dvt_interval = dvt.render();
	{ // main window menubar injection
		if (ImGui::Begin("tomato")) {
@ -442,6 +487,7 @@ Screen* MainScreen::render(float time_delta, bool&) {
	if (!_window_hidden && _time_since_event < curr_profile.low_delay_window) {
		_render_interval = std::min<float>(_render_interval, ctc_interval);
		_render_interval = std::min<float>(_render_interval, msgtc_interval);
+		_render_interval = std::min<float>(_render_interval, dvt_interval);
		_render_interval = std::clamp(
			_render_interval,
@ -452,6 +498,7 @@ Screen* MainScreen::render(float time_delta, bool&) {
	} else if (!_window_hidden && _time_since_event < curr_profile.mid_delay_window) {
		_render_interval = std::min<float>(_render_interval, ctc_interval);
		_render_interval = std::min<float>(_render_interval, msgtc_interval);
+		_render_interval = std::min<float>(_render_interval, dvt_interval);
		_render_interval = std::clamp(
			_render_interval,
@ -474,8 +521,20 @@ Screen* MainScreen::render(float time_delta, bool&) {
}
Screen* MainScreen::tick(float time_delta, bool& quit) {
+	const float sm_interval = sm.tick(time_delta);
	quit = !tc.iterate(time_delta); // compute
+#if TOMATO_TOX_AV
+	tav.toxavIterate();
+	// breaks it
+	// HACK: pow by 1.18 to increase 200 -> ~500
+	//const float av_interval = std::pow(tav.toxavIterationInterval(), 1.18)/1000.f;
+	const float av_interval = tav.toxavIterationInterval()/1000.f;
+	tavvoip.tick();
+#endif
	tcm.iterate(time_delta); // compute
	const float fo_interval = tffom.tick(time_delta);
@ -505,11 +564,22 @@ Screen* MainScreen::tick(float time_delta, bool& quit) {
		std::pow(tc.toxIterationInterval(), 1.6f)/1000.f,
		pm_interval
	);
+	_min_tick_interval = std::min<float>(
+		_min_tick_interval,
+		sm_interval
+	);
	_min_tick_interval = std::min<float>(
		_min_tick_interval,
		fo_interval
	);
+#if TOMATO_TOX_AV
+	_min_tick_interval = std::min<float>(
+		_min_tick_interval,
+		av_interval
+	);
+#endif
	//std::cout << "MS: min tick interval: " << _min_tick_interval << "\n";
	switch (_compute_perf_mode) {

View File

@ -11,6 +11,7 @@
#include <solanaceae/plugin/plugin_manager.hpp>
#include <solanaceae/toxcore/tox_event_logger.hpp>
#include "./tox_private_impl.hpp"
+#include "./frame_streams/stream_manager.hpp"
#include <solanaceae/tox_contacts/tox_contact_model2.hpp>
#include <solanaceae/tox_messages/tox_message_manager.hpp>
@ -33,9 +34,12 @@
#include "./tox_ui_utils.hpp"
#include "./tox_dht_cap_histo.hpp"
#include "./tox_friend_faux_offline_messaging.hpp"
+#include "./stream_manager_ui.hpp"
+#include "./debug_video_tap.hpp"
#if TOMATO_TOX_AV
#include "./tox_av.hpp"
+#include "./tox_av_voip_model.hpp"
#endif
#include <string>
@ -58,17 +62,20 @@ struct MainScreen final : public Screen {
	MessageSerializerNJ msnj;
	MessageTimeSort mts;
+	StreamManager sm;
	ToxEventLogger tel{std::cout};
	ToxClient tc;
	ToxPrivateImpl tpi;
	AutoDirty ad;
-#if TOMATO_TOX_AV
-	ToxAV tav;
-#endif
	ToxContactModel2 tcm;
	ToxMessageManager tmm;
	ToxTransferManager ttm;
	ToxFriendFauxOfflineMessaging tffom;
+#if TOMATO_TOX_AV
+	ToxAVI tav;
+	ToxAVVoIPModel tavvoip;
+#endif
	Theme& theme;
@ -88,6 +95,8 @@ struct MainScreen final : public Screen {
	ObjectStoreUI osui;
	ToxUIUtils tuiu;
	ToxDHTCapHisto tdch;
+	StreamManagerUI smui;
+	DebugVideoTap dvt;
	PluginManager pm; // last, so it gets destroyed first

View File

@ -32,7 +32,7 @@ uint64_t SDLRendererTextureUploader::uploadRGBA(const uint8_t* data, uint32_t wi
	SDL_UpdateTexture(tex, nullptr, surf->pixels, surf->pitch);
	SDL_BlendMode surf_blend_mode = SDL_BLENDMODE_NONE;
-	if (SDL_GetSurfaceBlendMode(surf, &surf_blend_mode) == 0) {
+	if (SDL_GetSurfaceBlendMode(surf, &surf_blend_mode)) {
		SDL_SetTextureBlendMode(tex, surf_blend_mode);
	}

234
src/stream_manager_ui.cpp Normal file
View File

@ -0,0 +1,234 @@
#include "./stream_manager_ui.hpp"
#include <solanaceae/object_store/object_store.hpp>
#include <imgui/imgui.h>
#include <string>
StreamManagerUI::StreamManagerUI(ObjectStore2& os, StreamManager& sm) : _os(os), _sm(sm) {
}
void StreamManagerUI::render(void) {
{ // main window menubar injection
// assumes the window "tomato" was rendered already by cg
if (ImGui::Begin("tomato")) {
if (ImGui::BeginMenuBar()) {
// TODO: drop all menu sep?
//ImGui::Separator(); // os already exists (very hacky)
if (ImGui::BeginMenu("ObjectStore")) {
if (ImGui::MenuItem("Stream Manger", nullptr, _show_window)) {
_show_window = !_show_window;
}
ImGui::EndMenu();
}
ImGui::EndMenuBar();
}
}
ImGui::End();
}
if (!_show_window) {
return;
}
if (ImGui::Begin("StreamManagerUI", &_show_window)) {
// TODO: node canvas
// by frame type ??
if (ImGui::CollapsingHeader("Sources", ImGuiTreeNodeFlags_DefaultOpen)) {
// list sources
if (ImGui::BeginTable("sources_and_sinks", 4, ImGuiTableFlags_SizingFixedFit | ImGuiTableFlags_BordersInnerV)) {
ImGui::TableSetupColumn("id");
ImGui::TableSetupColumn("name");
ImGui::TableSetupColumn("##conn");
ImGui::TableSetupColumn("type");
ImGui::TableHeadersRow();
for (const auto& [oc, ss] : _os.registry().view<Components::StreamSource>().each()) {
//ImGui::Text("src %d (%s)[%s]", entt::to_integral(entt::to_entity(oc)), ss.name.c_str(), ss.frame_type_name.c_str());
ImGui::PushID(entt::to_integral(oc));
ImGui::TableNextColumn();
ImGui::Text("%d", entt::to_integral(entt::to_entity(oc)));
if (_os.registry().all_of<Components::TagDefaultTarget>(oc)) {
ImGui::TableSetBgColor(ImGuiTableBgTarget_RowBg1, ImGui::GetColorU32(ImVec4{0.6f, 0.f, 0.6f, 0.25f}));
} else if (_os.registry().all_of<Components::TagConnectToDefault>(oc)) {
ImGui::TableSetBgColor(ImGuiTableBgTarget_RowBg1, ImGui::GetColorU32(ImVec4{0.6f, 0.6f, 0.f, 0.25f}));
}
const auto *ssrc = _os.registry().try_get<Components::StreamSource>(oc);
ImGui::TableNextColumn();
ImGui::TextUnformatted(ssrc!=nullptr?ssrc->name.c_str():"none");
ImGui::TableNextColumn();
if (ImGui::SmallButton("->")) {
ImGui::OpenPopup("src_connect");
}
if (ImGui::BeginPopup("src_connect")) {
if (ImGui::BeginMenu("connect to")) {
for (const auto& [oc_sink, s_sink] : _os.registry().view<Components::StreamSink>().each()) {
if (s_sink.frame_type_name != ss.frame_type_name) {
continue;
}
ImGui::PushID(entt::to_integral(oc_sink));
std::string sink_label {"src "};
sink_label += std::to_string(entt::to_integral(entt::to_entity(oc_sink)));
sink_label += " (";
sink_label += s_sink.name;
sink_label += ")[";
sink_label += s_sink.frame_type_name;
sink_label += "]";
if (ImGui::MenuItem(sink_label.c_str())) {
_sm.connect(oc, oc_sink);
}
ImGui::PopID();
}
ImGui::EndMenu();
}
ImGui::EndPopup();
}
ImGui::TableNextColumn();
ImGui::TextUnformatted(ssrc!=nullptr?ssrc->frame_type_name.c_str():"???");
ImGui::PopID();
}
ImGui::EndTable();
}
} // sources header
if (ImGui::CollapsingHeader("Sinks", ImGuiTreeNodeFlags_DefaultOpen)) {
// list sinks
if (ImGui::BeginTable("sources_and_sinks", 4, ImGuiTableFlags_SizingFixedFit | ImGuiTableFlags_BordersInnerV)) {
ImGui::TableSetupColumn("id");
ImGui::TableSetupColumn("name");
ImGui::TableSetupColumn("##conn");
ImGui::TableSetupColumn("type");
ImGui::TableHeadersRow();
for (const auto& [oc, ss] : _os.registry().view<Components::StreamSink>().each()) {
ImGui::PushID(entt::to_integral(oc));
ImGui::TableNextColumn();
ImGui::Text("%d", entt::to_integral(entt::to_entity(oc)));
if (_os.registry().all_of<Components::TagDefaultTarget>(oc)) {
ImGui::TableSetBgColor(ImGuiTableBgTarget_RowBg1, ImGui::GetColorU32(ImVec4{0.6f, 0.f, 0.6f, 0.25f}));
} else if (_os.registry().all_of<Components::TagConnectToDefault>(oc)) {
ImGui::TableSetBgColor(ImGuiTableBgTarget_RowBg1, ImGui::GetColorU32(ImVec4{0.6f, 0.6f, 0.f, 0.25f}));
}
const auto *ssink = _os.registry().try_get<Components::StreamSink>(oc);
ImGui::TableNextColumn();
ImGui::TextUnformatted(ssink!=nullptr?ssink->name.c_str():"none");
ImGui::TableNextColumn();
if (ImGui::SmallButton("->")) {
ImGui::OpenPopup("sink_connect");
}
// ImGuiWindowFlags_AlwaysAutoResize | ImGuiWindowFlags_NoTitleBar | ImGuiWindowFlags_NoSavedSettings
if (ImGui::BeginPopup("sink_connect")) {
if (ImGui::BeginMenu("connect to")) {
for (const auto& [oc_src, s_src] : _os.registry().view<Components::StreamSource>().each()) {
if (s_src.frame_type_name != ss.frame_type_name) {
continue;
}
ImGui::PushID(entt::to_integral(oc_src));
std::string source_label {"src "};
source_label += std::to_string(entt::to_integral(entt::to_entity(oc_src)));
source_label += " (";
source_label += s_src.name;
source_label += ")[";
source_label += s_src.frame_type_name;
source_label += "]";
if (ImGui::MenuItem(source_label.c_str())) {
_sm.connect(oc_src, oc);
}
ImGui::PopID();
}
ImGui::EndMenu();
}
ImGui::EndPopup();
}
ImGui::TableNextColumn();
ImGui::TextUnformatted(ssink!=nullptr?ssink->frame_type_name.c_str():"???");
ImGui::PopID();
}
ImGui::EndTable();
}
} // sink header
if (ImGui::CollapsingHeader("Connections", ImGuiTreeNodeFlags_DefaultOpen)) {
// list connections
if (ImGui::BeginTable("connections", 6, ImGuiTableFlags_SizingFixedFit | ImGuiTableFlags_BordersInnerV)) {
ImGui::TableSetupColumn("##id"); // TODO: remove?
ImGui::TableSetupColumn("##disco");
ImGui::TableSetupColumn("##qdesc");
ImGui::TableSetupColumn("from");
ImGui::TableSetupColumn("to");
ImGui::TableSetupColumn("type");
ImGui::TableHeadersRow();
for (size_t i = 0; i < _sm._connections.size(); i++) {
const auto& con = _sm._connections[i];
//ImGui::Text("con %d->%d", entt::to_integral(entt::to_entity(con->src.entity())), entt::to_integral(entt::to_entity(con->sink.entity())));
ImGui::PushID(i);
ImGui::TableNextColumn();
ImGui::Text("%zu", i); // do connections have ids?
ImGui::TableNextColumn();
if (ImGui::SmallButton("X")) {
con->stop = true;
}
ImGui::TableNextColumn();
ImGui::Text("%d->%d", entt::to_integral(entt::to_entity(con->src.entity())), entt::to_integral(entt::to_entity(con->sink.entity())));
const auto *ssrc = con->src.try_get<Components::StreamSource>();
ImGui::TableNextColumn();
ImGui::TextUnformatted(ssrc!=nullptr?ssrc->name.c_str():"none");
const auto *ssink = con->sink.try_get<Components::StreamSink>();
ImGui::TableNextColumn();
ImGui::TextUnformatted(ssink!=nullptr?ssink->name.c_str():"none");
ImGui::TableNextColumn();
ImGui::TextUnformatted(
(ssrc!=nullptr)?
ssrc->frame_type_name.c_str():
(ssink!=nullptr)?
ssink->frame_type_name.c_str()
:"???"
);
ImGui::PopID();
}
ImGui::EndTable();
}
} // con header
}
ImGui::End();
}

17
src/stream_manager_ui.hpp Normal file
View File

@ -0,0 +1,17 @@
#pragma once
#include <solanaceae/object_store/fwd.hpp>
#include "./frame_streams/stream_manager.hpp"
class StreamManagerUI {
ObjectStore2& _os;
StreamManager& _sm;
bool _show_window {false};
public:
StreamManagerUI(ObjectStore2& os, StreamManager& sm);
void render(void);
};

View File

@ -2,81 +2,239 @@
#include <cassert>
+#include <cstdint>
+#include <iostream>
// https://almogfx.bandcamp.com/track/crushed-w-cassade
-ToxAV::ToxAV(Tox* tox) : _tox(tox) {
+ToxAVI::ToxAVI(Tox* tox) : _tox(tox) {
	Toxav_Err_New err_new {TOXAV_ERR_NEW_OK};
	_tox_av = toxav_new(_tox, &err_new);
	// TODO: throw
	assert(err_new == TOXAV_ERR_NEW_OK);
+	toxav_callback_call(
+		_tox_av,
+		+[](ToxAV*, uint32_t friend_number, bool audio_enabled, bool video_enabled, void *user_data) {
+			assert(user_data != nullptr);
+			static_cast<ToxAVI*>(user_data)->cb_call(friend_number, audio_enabled, video_enabled);
+		},
+		this
+	);
+	toxav_callback_call_state(
+		_tox_av,
+		+[](ToxAV*, uint32_t friend_number, uint32_t state, void *user_data) {
+			assert(user_data != nullptr);
+			static_cast<ToxAVI*>(user_data)->cb_call_state(friend_number, state);
+		},
+		this
+	);
+	toxav_callback_audio_bit_rate(
+		_tox_av,
+		+[](ToxAV*, uint32_t friend_number, uint32_t audio_bit_rate, void *user_data) {
+			assert(user_data != nullptr);
+			static_cast<ToxAVI*>(user_data)->cb_audio_bit_rate(friend_number, audio_bit_rate);
+		},
+		this
+	);
+	toxav_callback_video_bit_rate(
+		_tox_av,
+		+[](ToxAV*, uint32_t friend_number, uint32_t video_bit_rate, void *user_data) {
+			assert(user_data != nullptr);
+			static_cast<ToxAVI*>(user_data)->cb_video_bit_rate(friend_number, video_bit_rate);
+		},
+		this
+	);
+	toxav_callback_audio_receive_frame(
+		_tox_av,
+		+[](ToxAV*, uint32_t friend_number, const int16_t pcm[], size_t sample_count, uint8_t channels, uint32_t sampling_rate, void *user_data) {
+			assert(user_data != nullptr);
+			static_cast<ToxAVI*>(user_data)->cb_audio_receive_frame(friend_number, pcm, sample_count, channels, sampling_rate);
+		},
+		this
+	);
+	toxav_callback_video_receive_frame(
+		_tox_av,
+		+[](ToxAV*, uint32_t friend_number,
+			uint16_t width, uint16_t height,
+			const uint8_t y[/*! max(width, abs(ystride)) * height */],
+			const uint8_t u[/*! max(width/2, abs(ustride)) * (height/2) */],
+			const uint8_t v[/*! max(width/2, abs(vstride)) * (height/2) */],
+			int32_t ystride, int32_t ustride, int32_t vstride,
+			void *user_data
+		) {
+			assert(user_data != nullptr);
+			static_cast<ToxAVI*>(user_data)->cb_video_receive_frame(friend_number, width, height, y, u, v, ystride, ustride, vstride);
+		},
+		this
+	);
}
-ToxAV::~ToxAV(void) {
+ToxAVI::~ToxAVI(void) {
	toxav_kill(_tox_av);
}
-uint32_t ToxAV::toxavIterationInterval(void) const {
+uint32_t ToxAVI::toxavIterationInterval(void) const {
	return toxav_iteration_interval(_tox_av);
}
-void ToxAV::toxavIterate(void) {
+void ToxAVI::toxavIterate(void) {
	toxav_iterate(_tox_av);
}
-uint32_t ToxAV::toxavAudioIterationInterval(void) const {
+uint32_t ToxAVI::toxavAudioIterationInterval(void) const {
	return toxav_audio_iteration_interval(_tox_av);
}
-void ToxAV::toxavAudioIterate(void) {
+void ToxAVI::toxavAudioIterate(void) {
	toxav_audio_iterate(_tox_av);
}
-uint32_t ToxAV::toxavVideoIterationInterval(void) const {
+uint32_t ToxAVI::toxavVideoIterationInterval(void) const {
	return toxav_video_iteration_interval(_tox_av);
}
-void ToxAV::toxavVideoIterate(void) {
+void ToxAVI::toxavVideoIterate(void) {
	toxav_video_iterate(_tox_av);
}
-Toxav_Err_Call ToxAV::toxavCall(uint32_t friend_number, uint32_t audio_bit_rate, uint32_t video_bit_rate) {
+Toxav_Err_Call ToxAVI::toxavCall(uint32_t friend_number, uint32_t audio_bit_rate, uint32_t video_bit_rate) {
	Toxav_Err_Call err {TOXAV_ERR_CALL_OK};
	toxav_call(_tox_av, friend_number, audio_bit_rate, video_bit_rate, &err);
	return err;
}
-Toxav_Err_Answer ToxAV::toxavAnswer(uint32_t friend_number, uint32_t audio_bit_rate, uint32_t video_bit_rate) {
+Toxav_Err_Answer ToxAVI::toxavAnswer(uint32_t friend_number, uint32_t audio_bit_rate, uint32_t video_bit_rate) {
	Toxav_Err_Answer err {TOXAV_ERR_ANSWER_OK};
	toxav_answer(_tox_av, friend_number, audio_bit_rate, video_bit_rate, &err);
	return err;
}
-Toxav_Err_Call_Control ToxAV::toxavCallControl(uint32_t friend_number, Toxav_Call_Control control) {
+Toxav_Err_Call_Control ToxAVI::toxavCallControl(uint32_t friend_number, Toxav_Call_Control control) {
	Toxav_Err_Call_Control err {TOXAV_ERR_CALL_CONTROL_OK};
	toxav_call_control(_tox_av, friend_number, control, &err);
	return err;
}
-Toxav_Err_Send_Frame ToxAV::toxavAudioSendFrame(uint32_t friend_number, const int16_t pcm[], size_t sample_count, uint8_t channels, uint32_t sampling_rate) {
+Toxav_Err_Send_Frame ToxAVI::toxavAudioSendFrame(uint32_t friend_number, const int16_t pcm[], size_t sample_count, uint8_t channels, uint32_t sampling_rate) {
	Toxav_Err_Send_Frame err {TOXAV_ERR_SEND_FRAME_OK};
	toxav_audio_send_frame(_tox_av, friend_number, pcm, sample_count, channels, sampling_rate, &err);
	return err;
}
-Toxav_Err_Bit_Rate_Set ToxAV::toxavAudioSetBitRate(uint32_t friend_number, uint32_t bit_rate) {
+Toxav_Err_Bit_Rate_Set ToxAVI::toxavAudioSetBitRate(uint32_t friend_number, uint32_t bit_rate) {
	Toxav_Err_Bit_Rate_Set err {TOXAV_ERR_BIT_RATE_SET_OK};
	toxav_audio_set_bit_rate(_tox_av, friend_number, bit_rate, &err);
	return err;
}
-Toxav_Err_Send_Frame ToxAV::toxavVideoSendFrame(uint32_t friend_number, uint16_t width, uint16_t height, const uint8_t y[], const uint8_t u[], const uint8_t v[]) {
+Toxav_Err_Send_Frame ToxAVI::toxavVideoSendFrame(uint32_t friend_number, uint16_t width, uint16_t height, const uint8_t y[], const uint8_t u[], const uint8_t v[]) {
	Toxav_Err_Send_Frame err {TOXAV_ERR_SEND_FRAME_OK};
	toxav_video_send_frame(_tox_av, friend_number, width, height, y, u, v, &err);
	return err;
}
-Toxav_Err_Bit_Rate_Set ToxAV::toxavVideoSetBitRate(uint32_t friend_number, uint32_t bit_rate) {
+Toxav_Err_Bit_Rate_Set ToxAVI::toxavVideoSetBitRate(uint32_t friend_number, uint32_t bit_rate) {
	Toxav_Err_Bit_Rate_Set err {TOXAV_ERR_BIT_RATE_SET_OK};
	toxav_video_set_bit_rate(_tox_av, friend_number, bit_rate, &err);
	return err;
}
void ToxAVI::cb_call(uint32_t friend_number, bool audio_enabled, bool video_enabled) {
std::cerr << "TOXAV: receiving call f:" << friend_number << " a:" << audio_enabled << " v:" << video_enabled << "\n";
//Toxav_Err_Answer err_answer { TOXAV_ERR_ANSWER_OK };
//toxav_answer(_tox_av, friend_number, 0, 0, &err_answer);
//if (err_answer != TOXAV_ERR_ANSWER_OK) {
// std::cerr << "!!!!!!!! answer failed " << err_answer << "\n";
//}
dispatch(
ToxAV_Event::friend_call,
Events::FriendCall{
friend_number,
audio_enabled,
video_enabled,
}
);
}
void ToxAVI::cb_call_state(uint32_t friend_number, uint32_t state) {
//ToxAVFriendCallState w_state{state};
//w_state.is_error();
std::cerr << "TOXAV: call state f:" << friend_number << " s:" << state << "\n";
dispatch(
ToxAV_Event::friend_call_state,
Events::FriendCallState{
friend_number,
state,
}
);
}
void ToxAVI::cb_audio_bit_rate(uint32_t friend_number, uint32_t audio_bit_rate) {
std::cerr << "TOXAV: audio bitrate f:" << friend_number << " abr:" << audio_bit_rate << "\n";
dispatch(
ToxAV_Event::friend_audio_bitrate,
Events::FriendAudioBitrate{
friend_number,
audio_bit_rate,
}
);
}
void ToxAVI::cb_video_bit_rate(uint32_t friend_number, uint32_t video_bit_rate) {
std::cerr << "TOXAV: video bitrate f:" << friend_number << " vbr:" << video_bit_rate << "\n";
dispatch(
ToxAV_Event::friend_video_bitrate,
Events::FriendVideoBitrate{
friend_number,
video_bit_rate,
}
);
}
void ToxAVI::cb_audio_receive_frame(uint32_t friend_number, const int16_t pcm[], size_t sample_count, uint8_t channels, uint32_t sampling_rate) {
//std::cerr << "TOXAV: audio frame f:" << friend_number << " sc:" << sample_count << " ch:" << (int)channels << " sr:" << sampling_rate << "\n";
dispatch(
ToxAV_Event::friend_audio_frame,
Events::FriendAudioFrame{
friend_number,
Span<int16_t>(pcm, sample_count*channels), // TODO: is sample count *ch or /ch?
channels,
sampling_rate,
}
);
}
void ToxAVI::cb_video_receive_frame(
uint32_t friend_number,
uint16_t width, uint16_t height,
const uint8_t y[/*! max(width, abs(ystride)) * height */],
const uint8_t u[/*! max(width/2, abs(ustride)) * (height/2) */],
const uint8_t v[/*! max(width/2, abs(vstride)) * (height/2) */],
int32_t ystride, int32_t ustride, int32_t vstride
) {
//std::cerr << "TOXAV: video frame f:" << friend_number << " w:" << width << " h:" << height << "\n";
dispatch(
ToxAV_Event::friend_video_frame,
Events::FriendVideoFrame{
friend_number,
width,
height,
Span<uint8_t>(y, std::max<int64_t>(width, std::abs(ystride)) * height),
Span<uint8_t>(u, std::max<int64_t>(width/2, std::abs(ustride)) * (height/2)),
Span<uint8_t>(v, std::max<int64_t>(width/2, std::abs(vstride)) * (height/2)),
ystride,
ustride,
vstride,
}
);
}

View File

@ -1,15 +1,99 @@
#pragma once
+#include <solanaceae/util/span.hpp>
+#include <solanaceae/util/event_provider.hpp>
#include <tox/toxav.h>
-struct ToxAV {
+namespace /*toxav*/ Events {
+	struct FriendCall {
+		uint32_t friend_number;
+		bool audio_enabled;
+		bool video_enabled;
+	};
+	struct FriendCallState {
+		uint32_t friend_number;
+		uint32_t state;
+	};
+	struct FriendAudioBitrate {
+		uint32_t friend_number;
+		uint32_t audio_bit_rate;
+	};
+	struct FriendVideoBitrate {
+		uint32_t friend_number;
+		uint32_t video_bit_rate;
+	};
+	struct FriendAudioFrame {
+		uint32_t friend_number;
+		Span<int16_t> pcm;
+		//size_t sample_count;
+		uint8_t channels;
+		uint32_t sampling_rate;
+	};
+	struct FriendVideoFrame {
+		uint32_t friend_number;
+		uint16_t width;
+		uint16_t height;
+		//const uint8_t y[[>! max(width, abs(ystride)) * height <]];
+		//const uint8_t u[[>! max(width/2, abs(ustride)) * (height/2) <]];
+		//const uint8_t v[[>! max(width/2, abs(vstride)) * (height/2) <]];
+		// mdspan would be nice here
+		// bc of the stride, span might be larger than the actual data it contains
+		Span<uint8_t> y;
+		Span<uint8_t> u;
+		Span<uint8_t> v;
+		int32_t ystride;
+		int32_t ustride;
+		int32_t vstride;
+	};
+} // Events
+enum class ToxAV_Event : uint32_t {
+	friend_call,
+	friend_call_state,
+	friend_audio_bitrate,
+	friend_video_bitrate,
+	friend_audio_frame,
+	friend_video_frame,
+	MAX
+};
+struct ToxAVEventI {
+	using enumType = ToxAV_Event;
+	virtual ~ToxAVEventI(void) {}
+	virtual bool onEvent(const Events::FriendCall&) { return false; }
+	virtual bool onEvent(const Events::FriendCallState&) { return false; }
+	virtual bool onEvent(const Events::FriendAudioBitrate&) { return false; }
+	virtual bool onEvent(const Events::FriendVideoBitrate&) { return false; }
+	virtual bool onEvent(const Events::FriendAudioFrame&) { return false; }
+	virtual bool onEvent(const Events::FriendVideoFrame&) { return false; }
+};
+using ToxAVEventProviderI = EventProviderI<ToxAVEventI>;
+// TODO: separate out implementation from interface
+struct ToxAVI : public ToxAVEventProviderI {
	Tox* _tox = nullptr;
	ToxAV* _tox_av = nullptr;
-	ToxAV(Tox* tox);
-	virtual ~ToxAV(void);
+	static constexpr const char* version {"0"};
+	ToxAVI(Tox* tox);
+	virtual ~ToxAVI(void);
	// interface
+	// if iterate is called on a different thread, it will fire events there
	uint32_t toxavIterationInterval(void) const;
	void toxavIterate(void);
@ -33,5 +117,32 @@ struct ToxAV {
	//int32_t toxav_groupchat_disable_av(Tox *tox, uint32_t groupnumber);
	//bool toxav_groupchat_av_enabled(Tox *tox, uint32_t groupnumber);
+	// toxav callbacks
+	void cb_call(uint32_t friend_number, bool audio_enabled, bool video_enabled);
+	void cb_call_state(uint32_t friend_number, uint32_t state);
+	void cb_audio_bit_rate(uint32_t friend_number, uint32_t audio_bit_rate);
+	void cb_video_bit_rate(uint32_t friend_number, uint32_t video_bit_rate);
+	void cb_audio_receive_frame(uint32_t friend_number, const int16_t pcm[], size_t sample_count, uint8_t channels, uint32_t sampling_rate);
+	void cb_video_receive_frame(
+		uint32_t friend_number,
+		uint16_t width, uint16_t height,
+		const uint8_t y[/*! max(width, abs(ystride)) * height */],
+		const uint8_t u[/*! max(width/2, abs(ustride)) * (height/2) */],
+		const uint8_t v[/*! max(width/2, abs(vstride)) * (height/2) */],
+		int32_t ystride, int32_t ustride, int32_t vstride
+	);
+};
+struct ToxAVFriendCallState final {
+	const uint32_t state {TOXAV_FRIEND_CALL_STATE_NONE};
+	[[nodiscard]] bool is_error(void) const { return state & TOXAV_FRIEND_CALL_STATE_ERROR; }
+	[[nodiscard]] bool is_finished(void) const { return state & TOXAV_FRIEND_CALL_STATE_FINISHED; }
+	[[nodiscard]] bool is_sending_a(void) const { return state & TOXAV_FRIEND_CALL_STATE_SENDING_A; }
+	[[nodiscard]] bool is_sending_v(void) const { return state & TOXAV_FRIEND_CALL_STATE_SENDING_V; }
+	[[nodiscard]] bool is_accepting_a(void) const { return state & TOXAV_FRIEND_CALL_STATE_ACCEPTING_A; }
+	[[nodiscard]] bool is_accepting_v(void) const { return state & TOXAV_FRIEND_CALL_STATE_ACCEPTING_V; }
};

476
src/tox_av_voip_model.cpp Normal file
View File

@ -0,0 +1,476 @@
#include "./tox_av_voip_model.hpp"
#include <solanaceae/object_store/object_store.hpp>
#include <solanaceae/tox_contacts/components.hpp>
#include "./frame_streams/stream_manager.hpp"
#include "./frame_streams/audio_stream2.hpp"
#include "./frame_streams/locked_frame_stream.hpp"
#include "./frame_streams/multi_source.hpp"
#include "./frame_streams/audio_stream_pop_reframer.hpp"
#include <iostream>
namespace Components {
struct ToxAVIncomingAV {
bool incoming_audio {false};
bool incoming_video {false};
};
struct ToxAVAudioSink {
ObjectHandle o;
// ptr?
};
// vid
struct ToxAVAudioSource {
ObjectHandle o;
// ptr?
};
// vid
} // Components
struct ToxAVCallAudioSink : public FrameStream2SinkI<AudioFrame2> {
ToxAVI& _toxav;
// bitrate for enabled state
uint32_t _audio_bitrate {32};
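// tox friend number this sink sends audio to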
uint32_t _fid;
std::shared_ptr<AudioStreamPopReFramer<LockedFrameStream2<AudioFrame2>>> _writer;
ToxAVCallAudioSink(ToxAVI& toxav, uint32_t fid) : _toxav(toxav), _fid(fid) {}
~ToxAVCallAudioSink(void) {
if (_writer) {
_writer = nullptr;
_toxav.toxavAudioSetBitRate(_fid, 0);
}
}
// sink
std::shared_ptr<FrameStream2I<AudioFrame2>> subscribe(void) override {
if (_writer) {
// max 1 (exclusive for now)
return nullptr;
}
auto err = _toxav.toxavAudioSetBitRate(_fid, _audio_bitrate);
if (err != TOXAV_ERR_BIT_RATE_SET_OK) {
return nullptr;
}
// 20ms for now, 10ms would work too; still need to investigate the stutters at 5ms (the pump interval is probably too slow for that)
_writer = std::make_shared<AudioStreamPopReFramer<LockedFrameStream2<AudioFrame2>>>(20);
return _writer;
}
bool unsubscribe(const std::shared_ptr<FrameStream2I<AudioFrame2>>& sub) override {
if (!sub || !_writer) {
// nah
return false;
}
if (sub == _writer) {
_writer = nullptr;
/*auto err = */_toxav.toxavAudioSetBitRate(_fid, 0);
// print warning? on error?
return true;
}
// what
return false;
}
};
void ToxAVVoIPModel::addAudioSource(ObjectHandle session, uint32_t friend_number) {
auto& stream_source = session.get_or_emplace<Components::VoIP::StreamSources>().streams;
ObjectHandle incoming_audio {_os.registry(), _os.registry().create()};
auto new_asrc = std::make_unique<FrameStream2MultiSource<AudioFrame2>>();
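// keep both a raw pointer (for direct pushes from the toxav audio frame callback)
// and the type-erased source component that owns the stream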
incoming_audio.emplace<FrameStream2MultiSource<AudioFrame2>*>(new_asrc.get());
incoming_audio.emplace<Components::FrameStream2Source<AudioFrame2>>(std::move(new_asrc));
incoming_audio.emplace<Components::StreamSource>(Components::StreamSource::create<AudioFrame2>("ToxAV Friend Call Incoming Audio"));
std::cout << "new incoming audio\n";
if (
const auto* defaults = session.try_get<Components::VoIP::DefaultConfig>();
defaults != nullptr && defaults->incoming_audio
) {
incoming_audio.emplace<Components::TagConnectToDefault>(); // depends on what was specified in enter()
std::cout << "with default\n";
}
stream_source.push_back(incoming_audio);
session.emplace<Components::ToxAVAudioSource>(incoming_audio);
// TODO: tie session to stream
_audio_sources[friend_number] = incoming_audio;
_os.throwEventConstruct(incoming_audio);
}
void ToxAVVoIPModel::addAudioSink(ObjectHandle session, uint32_t friend_number) {
auto& stream_sinks = session.get_or_emplace<Components::VoIP::StreamSinks>().streams;
ObjectHandle outgoing_audio {_os.registry(), _os.registry().create()};
auto new_asink = std::make_unique<ToxAVCallAudioSink>(_av, friend_number);
outgoing_audio.emplace<ToxAVCallAudioSink*>(new_asink.get());
outgoing_audio.emplace<Components::FrameStream2Sink<AudioFrame2>>(std::move(new_asink));
outgoing_audio.emplace<Components::StreamSink>(Components::StreamSink::create<AudioFrame2>("ToxAV Friend Call Outgoing Audio"));
if (
const auto* defaults = session.try_get<Components::VoIP::DefaultConfig>();
defaults != nullptr && defaults->outgoing_audio
) {
outgoing_audio.emplace<Components::TagConnectToDefault>(); // depends on what was specified in enter()
}
stream_sinks.push_back(outgoing_audio);
session.emplace<Components::ToxAVAudioSink>(outgoing_audio);
// TODO: tie session to stream
_os.throwEventConstruct(outgoing_audio);
}
void ToxAVVoIPModel::destroySession(ObjectHandle session) {
if (!static_cast<bool>(session)) {
return;
}
// remove lookup
if (session.all_of<Components::ToxAVAudioSource>()) {
auto it_asrc = std::find_if(
_audio_sources.cbegin(), _audio_sources.cend(),
[o = session.get<Components::ToxAVAudioSource>().o](const auto& it) {
return it.second == o;
}
);
if (it_asrc != _audio_sources.cend()) {
_audio_sources.erase(it_asrc);
}
}
// destroy sources
if (auto* ss = session.try_get<Components::VoIP::StreamSources>(); ss != nullptr) {
for (const auto ssov : ss->streams) {
_os.throwEventDestroy(ssov);
_os.registry().destroy(ssov);
}
}
// destroy sinks
if (auto* ss = session.try_get<Components::VoIP::StreamSinks>(); ss != nullptr) {
for (const auto ssov : ss->streams) {
_os.throwEventDestroy(ssov);
_os.registry().destroy(ssov);
}
}
// destroy session
_os.throwEventDestroy(session);
_os.registry().destroy(session);
}
ToxAVVoIPModel::ToxAVVoIPModel(ObjectStore2& os, ToxAVI& av, Contact3Registry& cr, ToxContactModel2& tcm) :
_os(os), _av(av), _cr(cr), _tcm(tcm)
{
_av.subscribe(this, ToxAV_Event::friend_call);
_av.subscribe(this, ToxAV_Event::friend_call_state);
_av.subscribe(this, ToxAV_Event::friend_audio_bitrate);
_av.subscribe(this, ToxAV_Event::friend_video_bitrate);
_av.subscribe(this, ToxAV_Event::friend_audio_frame);
_av.subscribe(this, ToxAV_Event::friend_video_frame);
// attach to all tox friend contacts
for (const auto& [cv, _] : _cr.view<Contact::Components::ToxFriendPersistent>().each()) {
_cr.emplace<VoIPModelI*>(cv, this);
}
// TODO: events
}
ToxAVVoIPModel::~ToxAVVoIPModel(void) {
for (const auto& [ov, voipmodel] : _os.registry().view<VoIPModelI*>().each()) {
if (voipmodel == this) {
destroySession(_os.objectHandle(ov));
}
}
}
void ToxAVVoIPModel::tick(void) {
for (const auto& [oc, asink] : _os.registry().view<ToxAVCallAudioSink*>().each()) {
if (!asink->_writer) {
continue;
}
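// drain at most 100 buffered frames per sink each tick (a simple upper bound)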
for (size_t i = 0; i < 100; i++) {
auto new_frame_opt = asink->_writer->pop();
if (!new_frame_opt.has_value()) {
break;
}
const auto& new_frame = new_frame_opt.value();
//* @param sample_count Number of samples in this frame. Valid numbers here are
//* `((sample rate) * (audio length) / 1000)`, where audio length can be
//* 2.5, 5, 10, 20, 40 or 60 milliseconds.
// we likely need to subdivide/repackage
// frame size should be an option exposed to the user
// with 10ms as a default ?
// the larger the frame size, the less overhead but the more delay
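// e.g. 48kHz stereo at 20ms: span size = 48000*0.020*2 = 1920 int16 values, so sample_count below = 1920/2 = 960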
auto err = _av.toxavAudioSendFrame(
asink->_fid,
new_frame.getSpan().ptr,
new_frame.getSpan().size / new_frame.channels,
new_frame.channels,
new_frame.sample_rate
);
if (err != TOXAV_ERR_SEND_FRAME_OK) {
std::cerr << "DTC: failed to send audio frame " << err << "\n";
}
}
}
}
ObjectHandle ToxAVVoIPModel::enter(const Contact3 c, const Components::VoIP::DefaultConfig& defaults) {
if (!_cr.all_of<Contact::Components::ToxFriendEphemeral>(c)) {
return {};
}
const auto friend_number = _cr.get<Contact::Components::ToxFriendEphemeral>(c).friend_number;
const auto err = _av.toxavCall(friend_number, 0, 0);
if (err != TOXAV_ERR_CALL_OK) {
std::cerr << "TAVVOIP error: failed to start call: " << err << "\n";
return {};
}
ObjectHandle new_session {_os.registry(), _os.registry().create()};
new_session.emplace<VoIPModelI*>(this);
new_session.emplace<Components::VoIP::TagVoIPSession>(); // ??
new_session.emplace<Components::VoIP::SessionContact>(c);
new_session.emplace<Components::VoIP::SessionState>().state = Components::VoIP::SessionState::State::RINGING;
new_session.emplace<Components::VoIP::DefaultConfig>(defaults);
_os.throwEventConstruct(new_session);
return new_session;
}
bool ToxAVVoIPModel::accept(ObjectHandle session, const Components::VoIP::DefaultConfig& defaults) {
if (!static_cast<bool>(session)) {
return false;
}
if (!session.all_of<
Components::VoIP::TagVoIPSession,
VoIPModelI*,
Components::VoIP::SessionContact,
Components::VoIP::Incoming
>()) {
return false;
}
// check if self
if (session.get<VoIPModelI*>() != this) {
return false;
}
const auto session_contact = session.get<Components::VoIP::SessionContact>().c;
if (!_cr.all_of<Contact::Components::ToxFriendEphemeral>(session_contact)) {
return false;
}
const auto friend_number = _cr.get<Contact::Components::ToxFriendEphemeral>(session_contact).friend_number;
auto err = _av.toxavAnswer(friend_number, 0, 0);
if (err != TOXAV_ERR_ANSWER_OK) {
std::cerr << "TOXAVVOIP error: ansering call failed: " << err << "\n";
// we simply let it be for now, it apears we can try ansering later again
// we also get an error here when the call is already in progress (:
return false;
}
session.emplace<Components::VoIP::DefaultConfig>(defaults);
// answer defaults to enabled receiving audio and video
// TODO: think about how we should handle this
// set to disabled? and enable on src connection?
// sending already defaults to disabled and is enabled on sink connection
//_av.toxavCallControl(friend_number, TOXAV_CALL_CONTROL_HIDE_VIDEO);
//_av.toxavCallControl(friend_number, TOXAV_CALL_CONTROL_MUTE_AUDIO);
// how do we know the other side is accepting audio
// bitrate cb or what?
assert(!session.all_of<Components::ToxAVAudioSink>());
addAudioSink(session, friend_number);
if (const auto* i_av = session.try_get<Components::ToxAVIncomingAV>(); i_av != nullptr) {
// create audio src
if (i_av->incoming_audio) {
assert(!session.all_of<Components::ToxAVAudioSource>());
addAudioSource(session, friend_number);
}
// create video src
if (i_av->incoming_video) {
}
}
session.get_or_emplace<Components::VoIP::SessionState>().state = Components::VoIP::SessionState::State::CONNECTED;
_os.throwEventUpdate(session);
return true;
}
bool ToxAVVoIPModel::leave(ObjectHandle session) {
// rename to end?
if (!static_cast<bool>(session)) {
return false;
}
if (!session.all_of<
Components::VoIP::TagVoIPSession,
VoIPModelI*,
Components::VoIP::SessionContact
>()) {
return false;
}
// check if self
if (session.get<VoIPModelI*>() != this) {
return false;
}
const auto session_contact = session.get<Components::VoIP::SessionContact>().c;
if (!_cr.all_of<Contact::Components::ToxFriendEphemeral>(session_contact)) {
return false;
}
const auto friend_number = _cr.get<Contact::Components::ToxFriendEphemeral>(session_contact).friend_number;
// check error? (we delete anyway)
_av.toxavCallControl(friend_number, Toxav_Call_Control::TOXAV_CALL_CONTROL_CANCEL);
destroySession(session);
return true;
}
bool ToxAVVoIPModel::onEvent(const Events::FriendCall& e) {
// new incoming call, create voip session, ready to be accepted
// (or rejected...)
const auto session_contact = _tcm.getContactFriend(e.friend_number);
if (!_cr.valid(session_contact)) {
return false;
}
ObjectHandle new_session {_os.registry(), _os.registry().create()};
new_session.emplace<VoIPModelI*>(this);
new_session.emplace<Components::VoIP::TagVoIPSession>(); // ??
new_session.emplace<Components::VoIP::Incoming>(session_contact); // in 1on1 it's always the same contact, might leave blank
new_session.emplace<Components::VoIP::SessionContact>(session_contact);
new_session.emplace<Components::VoIP::SessionState>().state = Components::VoIP::SessionState::State::RINGING;
new_session.emplace<Components::ToxAVIncomingAV>(e.audio_enabled, e.video_enabled);
_os.throwEventConstruct(new_session);
return true;
}
bool ToxAVVoIPModel::onEvent(const Events::FriendCallState& e) {
const auto session_contact = _tcm.getContactFriend(e.friend_number);
if (!_cr.valid(session_contact)) {
return false;
}
ToxAVFriendCallState s{e.state};
// find session(s?)
// TODO: keep lookup table
for (const auto& [ov, voipmodel] : _os.registry().view<VoIPModelI*>().each()) {
if (voipmodel == this) {
auto o = _os.objectHandle(ov);
if (!o.all_of<Components::VoIP::SessionContact>()) {
continue;
}
if (session_contact != o.get<Components::VoIP::SessionContact>().c) {
continue;
}
if (s.is_error() || s.is_finished()) {
// destroy call
destroySession(o);
} else {
// remote accepted our call, or av send/recv conditions changed?
o.get<Components::VoIP::SessionState>().state = Components::VoIP::SessionState::State::CONNECTED; // set to in call ??
if (s.is_accepting_a() && !o.all_of<Components::ToxAVAudioSink>()) {
addAudioSink(o, e.friend_number);
} else if (!s.is_accepting_a() && o.all_of<Components::ToxAVAudioSink>()) {
// remove asink?
}
// video
// add/update sources
// audio
if (s.is_sending_a() && !o.all_of<Components::ToxAVAudioSource>()) {
addAudioSource(o, e.friend_number);
} else if (!s.is_sending_a() && o.all_of<Components::ToxAVAudioSource>()) {
// remove asrc?
}
// video
}
}
}
return true;
}
bool ToxAVVoIPModel::onEvent(const Events::FriendAudioBitrate&) {
return false;
}
bool ToxAVVoIPModel::onEvent(const Events::FriendVideoBitrate&) {
return false;
}
bool ToxAVVoIPModel::onEvent(const Events::FriendAudioFrame& e) {
auto asrc_it = _audio_sources.find(e.friend_number);
if (asrc_it == _audio_sources.cend()) {
// missing src from lookup table
return false;
}
auto asrc = asrc_it->second;
if (!static_cast<bool>(asrc)) {
// missing src to put frame into ??
return false;
}
assert(asrc.all_of<FrameStream2MultiSource<AudioFrame2>*>());
assert(asrc.all_of<Components::FrameStream2Source<AudioFrame2>>());
asrc.get<FrameStream2MultiSource<AudioFrame2>*>()->push(AudioFrame2{
e.sampling_rate,
e.channels,
std::vector<int16_t>(e.pcm.begin(), e.pcm.end()) // copy
});
return true;
}
bool ToxAVVoIPModel::onEvent(const Events::FriendVideoFrame&) {
return false;
}

46
src/tox_av_voip_model.hpp Normal file
View File

@ -0,0 +1,46 @@
#pragma once
#include <solanaceae/object_store/fwd.hpp>
#include <solanaceae/contact/contact_model3.hpp>
#include <solanaceae/tox_contacts/tox_contact_model2.hpp>
#include "./frame_streams/voip_model.hpp"
#include "./tox_av.hpp"
#include <unordered_map>
class ToxAVVoIPModel : protected ToxAVEventI, public VoIPModelI {
ObjectStore2& _os;
ToxAVI& _av;
Contact3Registry& _cr;
ToxContactModel2& _tcm;
// friend_number -> incoming audio source object, for faster lookup in the audio frame callback
std::unordered_map<uint32_t, ObjectHandle> _audio_sources;
// TODO: virtual? strategy? protected?
virtual void addAudioSource(ObjectHandle session, uint32_t friend_number);
virtual void addAudioSink(ObjectHandle session, uint32_t friend_number);
// TODO: video
void destroySession(ObjectHandle session);
public:
ToxAVVoIPModel(ObjectStore2& os, ToxAVI& av, Contact3Registry& cr, ToxContactModel2& tcm);
~ToxAVVoIPModel(void);
void tick(void);
public: // voip model
ObjectHandle enter(const Contact3 c, const Components::VoIP::DefaultConfig& defaults) override;
bool accept(ObjectHandle session, const Components::VoIP::DefaultConfig& defaults) override;
bool leave(ObjectHandle session) override;
protected: // toxav events
bool onEvent(const Events::FriendCall&) override;
bool onEvent(const Events::FriendCallState&) override;
bool onEvent(const Events::FriendAudioBitrate&) override;
bool onEvent(const Events::FriendVideoBitrate&) override;
bool onEvent(const Events::FriendAudioFrame&) override;
bool onEvent(const Events::FriendVideoFrame&) override;
};