124 Commits

SHA1 Message Date
34dc01d4dc fix chunk picker round robin so it actually works
(it only kind of worked before because of a bug)
make priority dynamic and fix skipping so that it works too
2024-11-22 13:51:48 +01:00
1bf1fbce75 small hs progress 2024-11-22 13:51:25 +01:00
37b92f67c8 respect voice state for receiving file messages
we should also check on send in the future
2024-11-06 10:50:42 +01:00
01c892df8c tweak finishing timer 2024-11-06 10:49:24 +01:00
6eb5826616 split recv and send, they dont share any code (probably) 2024-11-03 18:21:02 +01:00
2e6b15e4ad more hs drafting 2024-11-01 11:31:05 +01:00
63de78aaeb add spec draft to repo 2024-10-31 15:46:48 +01:00
2a0350a564 minor tweaks and fixes
especially preventing a stall on some packetloss scenarios
2024-10-31 11:39:16 +01:00
ee593536a2 boilerplate for hs2 2024-10-30 17:12:05 +01:00
6f2fa60394 handle init2 in ft1 (hacky) 2024-10-30 11:25:33 +01:00
96041fbcec add ft1_init2 and ft1_init_ack v3 2024-10-30 11:08:46 +01:00
b4eaf86ed1 dynamically choose chunk size 2024-10-28 23:23:05 +01:00
c7485c4577 use sr 2024-10-24 14:00:16 +02:00
51396314d1 big fix for not using the found free slot index!!
minor logging and tweaking changes
2024-10-24 11:50:03 +02:00
4aab6e489d fix uint64 cast to size_t 2024-10-23 14:11:26 +02:00
5e884fd3ee dont delay recv_done and check on init info 2024-10-23 13:46:00 +02:00
9a4be575ba minor stuff and logging 2024-10-23 12:51:22 +02:00
fd094b157f more 32bit stuff 2024-10-20 18:37:01 +02:00
c3c2d0f133 fixes for 32bit 2024-10-20 16:35:55 +02:00
4360b65309 update to rmmi 2024-10-06 11:37:07 +02:00
1d7416efed fix rare assert 2024-08-08 23:52:36 +02:00
a761378dd9 add p2prng packet to ext 2024-08-07 11:07:18 +02:00
60d6f27a12 disable sending timeout assert, we believe the underlying ngcft1 2024-08-04 10:15:26 +02:00
07099e4832 tag chunkpicker for update more often 2024-08-04 10:14:59 +02:00
9e2911b36c spread chosen chunks more thinly for small files
should help (a little) with distribution when there are many small files
2024-08-02 13:06:55 +02:00
6da1f9afca info fixes, should investigate more 2024-07-31 18:34:14 +02:00
8bd2c925a6 fix missized local have bitset 2024-07-25 14:57:27 +02:00
da406714ff adapt to new os and message file refactor 2024-07-24 17:55:31 +02:00
16cb755191 fix dangerous unchecked file stream read 2024-07-22 19:49:02 +02:00
54ace9d0b2 and use new backend code (partially transitioned to os backend) 2024-07-17 17:17:07 +02:00
e50e74e12f add os backend and add threaded hashing
still meh but nicer
2024-07-17 17:13:32 +02:00
f730844771 abstract away file2rwmapped construction to lower visibility 2024-07-15 17:44:30 +02:00
3fcfbc11a4 reduce includes and some scope
hopefully fixes the windows obj being too large
2024-07-15 16:38:33 +02:00
1efae931d1 better receiving transfer cleanup (reduces log spam) 2024-07-15 14:56:52 +02:00
0b2fa40cb9 lower rate on join
TODO: it's indiscriminate, only announce to fresh peers
2024-07-15 14:38:53 +02:00
489556e322 fix front access to empty array
and increase send timeout assert
2024-07-15 11:48:16 +02:00
10756e13ce small fixes 2024-07-14 20:11:37 +02:00
74414d0999 re-announce with exponential back-off 2024-07-14 12:38:00 +02:00
bc5599a230 refactor sending transfers the same way as receiving 2024-07-13 13:52:43 +02:00
ca89e43a40 refactor extract chunk picker systems 2024-07-13 12:36:49 +02:00
dd04e6131a transfer stats 2024-07-13 11:46:33 +02:00
31253f5708 tweak ft max numbers and add stats comp 2024-07-12 15:04:49 +02:00
eff25cb10b meh 2024-07-12 14:42:13 +02:00
6e681aa3fd light cca refactor and expose some cca values to the outside 2024-07-12 13:14:24 +02:00
1d97dbe73d rework bitset queue (worse) and send have_all instead (but better) 2024-07-10 15:47:33 +02:00
0e9b1b8877 add ext have all packet 2024-07-10 15:16:58 +02:00
f449cf623d fix bitset sizecheck and send out bitsets the first time someone
announces participation
2024-07-10 12:27:19 +02:00
bee7de3fb7 sequential strat now respects ReadHeadHint 2024-07-10 11:26:47 +02:00
822b979286 object download prio, not set anywhere yet, but the code is there now 2024-07-10 11:13:57 +02:00
ef91ec14fc explicit and better rng, remove junk and old code 2024-07-10 10:41:25 +02:00
699957f79a more consistently tag cp update and lower cooldown just in case 2024-07-09 14:45:00 +02:00
02d58928f4 small refactors 2024-07-09 11:40:01 +02:00
60e6f91541 cleanup old workaround code 2024-07-09 11:04:19 +02:00
92373d34f7 work around missing contact events (better now)
fix missing ft event on reset (oops)
hard assert sending transfers can not time out higher level
2024-07-09 11:00:59 +02:00
e0b278b168 hot fix 2024-07-08 18:46:26 +02:00
e5681b4ad5 rework chunk picker update logic and participation logic
disable most spammy log
2024-07-08 18:12:47 +02:00
79e3070422 better random init 2024-07-07 17:13:30 +02:00
bf1fa64973 chunk picker strategies 2024-07-07 16:49:31 +02:00
11dee5870c fix round robin and reduce num empty spins to improve perf 2024-07-07 15:55:22 +02:00
fab3d42ee9 transfer time temporality buffer 2024-07-07 15:27:30 +02:00
269daaa764 work around missing contact events and properly clear on exit 2024-07-07 14:15:26 +02:00
ea945e6360 increase out number for 4 peers until proper sending per peer is implemented 2024-07-07 13:56:52 +02:00
b068819069 higher tickrate if open requests
(we expect an init soon and dont want to bounce around)
2024-07-07 13:21:59 +02:00
b64a4ae31c better bitset print 2024-07-07 13:07:57 +02:00
266cddf816 properly account for open requests when determining how much to request 2024-07-07 12:45:23 +02:00
eaaf798661 clear receiving transfers
TODO: actually keep around for 2*delay, so missing packets can still be retransmitted
but this fixes perf issues
2024-07-07 11:07:31 +02:00
d19fc6ba30 new chunk picker, basically working
still needs work on the sending side and more bug fixes
2024-07-03 12:11:20 +02:00
613b183592 fix have bit packing 2024-07-03 11:03:26 +02:00
3fd6183c21 combined id refactor 2024-07-02 16:09:59 +02:00
92b3d1a5fb more chunk picker prep 2024-07-02 15:52:25 +02:00
edf58b70f5 receiving count for peer 2024-07-02 14:54:08 +02:00
33560f8f8a receiving transfers refactor 2024-06-30 14:03:06 +02:00
3286a7228c more minor refactoring 2024-06-28 22:18:11 +02:00
b53e291c68 wip chunk picker (still unused) and a small refactor 2024-06-28 15:13:17 +02:00
27cade4dfe track remote have and bitset 2024-06-25 21:09:46 +02:00
0b4041db7e move bitset to util 2024-06-25 12:45:28 +02:00
e9e38db1d5 move self have_chunk to bitset 2024-06-25 12:08:17 +02:00
c8619561ec refactor: move (object/content) components out 2024-06-24 16:42:23 +02:00
1b630bc07f impl and test bitset util 2024-06-24 12:14:51 +02:00
ee2411b8e0 hack: send ft1_have for every chunk we receive
produces unnecessary overhead, should be bundled
2024-06-23 15:12:31 +02:00
bc7417c1cd add send fn for new packets (parse and send still untested) 2024-06-23 12:55:23 +02:00
3827733f08 and remove the old code 2024-06-23 12:31:01 +02:00
5400c13f88 copy the remaining implemented send functions over 2024-06-23 12:14:02 +02:00
8972386971 send out pc1 announces for ft infohash
will eliminate the guesswork in the future
2024-06-23 10:17:48 +02:00
b27107af4c start moving pkg sending to ngcext
wip, but working as far as its implemented
2024-06-23 10:14:03 +02:00
bcde244a3c handle pc1 announce and reduce chance to sample random peer
(will remove random sample sometime in the future)
2024-06-22 17:01:52 +02:00
e9f22bc9ae make ft1sha1 observe disconnects 2024-06-22 14:08:12 +02:00
c09f2e6f8f ngcext: parse ft1_have, ft1_bitset, pc1_announce 2024-06-22 12:48:54 +02:00
0eb30246a8 small refactor and print in flight packages when timing out 2024-05-31 17:03:22 +02:00
c52ac19285 print window on done 2024-05-31 15:36:18 +02:00
1231e792a7 lift reduction increase threshold 2024-05-27 18:07:19 +02:00
319e754aff rework time since reduction to only grow if cca is active, also start warm 2024-05-27 11:59:32 +02:00
a4201f4407 track timepoint of last update 2024-05-27 11:31:36 +02:00
57575330dd port to file2, other minor improvements 2024-05-27 11:20:37 +02:00
eb2a19d8f3 hack replace content with improper use of objectstore 2024-04-29 11:55:11 +02:00
dfcb5dee97 adopt receivedby rename 2024-04-20 15:12:05 +02:00
0d40d1abaa dont request from self 2024-04-15 11:48:17 +02:00
61b667a4aa reserve memory to reduce number of allocations in hotspots
especially on the sender side
2024-03-16 11:30:55 +01:00
c03282eae8 actually fix the timeout for slow connections 2024-03-09 18:06:49 +01:00
5fd1f2ab84 fix missing virtual destructor and scale transfer timeout with concurrency 2024-03-05 16:48:58 +01:00
bccd04316a tweak them numbers again 2024-02-04 20:04:36 +01:00
ccf66fb80c update hex conv 2024-01-13 22:34:42 +01:00
ea032244e7 remote comps 2024-01-12 18:55:41 +01:00
0df0760c06 failing to send is now also a congestion event (hacky and only the first time we send data) 2024-01-11 00:48:57 +01:00
f02b03da7c update to plugin 7 and refactor (should improve speed) 2024-01-07 17:23:06 +01:00
103f36f2d2 update to new ngc_events 2023-12-26 21:16:35 +01:00
ad918a3253 add random cap (1020-1220) and tighten cubic rate limit more 2023-12-15 15:31:32 +01:00
70cea0d219 small fixes 2023-12-13 19:38:55 +01:00
b0e2cab17a limit the amount it can send in a single tick (speed boost :D) 2023-12-13 17:56:56 +01:00
0a53a76eb3 maybe filesystem::path can help us 2023-12-13 16:09:34 +01:00
5995059777 better error log + fix broken accept on file creation error 2023-12-13 15:45:36 +01:00
abf2645099 fix include order 2023-11-12 19:58:57 +01:00
7c16c54649 only decrease window on congestion if prev max window was not reached yet 2023-10-16 19:51:56 +02:00
a80e74065c ignore requests for running transfers 2023-10-15 22:02:34 +02:00
77f21f01e9 extend the protocol to support larger data packets and set it to the new tox constants numbers 2023-10-11 03:00:03 +02:00
27fd9e688b unread/read status 2023-09-30 00:27:01 +02:00
f28e79dcbc fix missing include 2023-09-15 20:07:19 +02:00
7af5fda0a6 better filter and cubic fixes 2023-09-08 00:41:25 +02:00
f91780c602 filter simple packet drops by not counting the first 4 packets arriving out of order 2023-09-07 12:26:54 +02:00
1e6929c93b only count a ce once 2023-09-02 13:28:32 +02:00
81a353570b more tweaking 2023-09-02 02:28:22 +02:00
070585ab3d remember the first sending transfer that could not send any packets and start there next iteration 2023-09-01 23:20:03 +02:00
ba8befbb2d more fixes 2023-09-01 17:34:05 +02:00
a1a9bf886a make cubic and flow more resilient 2023-09-01 15:51:28 +02:00
46 changed files with 5326 additions and 1318 deletions

.gitignore (new file, 22 lines)

@@ -0,0 +1,22 @@
.vs/
*.o
*.swp
~*
*~
.idea/
cmake-build-debug/
cmake-build-debugandtest/
cmake-build-release/
*.stackdump
*.coredump
compile_commands.json
/build*
.clangd
.cache
.DS_Store
.AppleDouble
.LSOverride
CMakeLists.txt.user*
CMakeCache.txt


@@ -43,17 +43,67 @@ target_link_libraries(solanaceae_ngcft1 PUBLIC
########################################
add_library(solanaceae_ngchs2
./solanaceae/ngc_hs2/ngc_hs2_send.hpp
./solanaceae/ngc_hs2/ngc_hs2_send.cpp
./solanaceae/ngc_hs2/ngc_hs2_recv.hpp
./solanaceae/ngc_hs2/ngc_hs2_recv.cpp
)
target_include_directories(solanaceae_ngchs2 PUBLIC .)
target_compile_features(solanaceae_ngchs2 PUBLIC cxx_std_17)
target_link_libraries(solanaceae_ngchs2 PUBLIC
solanaceae_ngcft1
solanaceae_tox_contacts
solanaceae_message3
solanaceae_object_store
)
########################################
add_library(solanaceae_sha1_ngcft1
# hacky deps
./solanaceae/ngc_ft1_sha1/mio.hpp
./solanaceae/ngc_ft1_sha1/file_rw_mapped.hpp
./solanaceae/ngc_ft1_sha1/file_constructor.hpp
./solanaceae/ngc_ft1_sha1/file_constructor.cpp
./solanaceae/ngc_ft1_sha1/backends/sha1_mapped_filesystem.hpp
./solanaceae/ngc_ft1_sha1/backends/sha1_mapped_filesystem.cpp
./solanaceae/ngc_ft1_sha1/hash_utils.hpp
./solanaceae/ngc_ft1_sha1/hash_utils.cpp
./solanaceae/ngc_ft1_sha1/util.hpp
./solanaceae/ngc_ft1_sha1/ft1_sha1_info.hpp
./solanaceae/ngc_ft1_sha1/ft1_sha1_info.cpp
./solanaceae/ngc_ft1_sha1/components.hpp
./solanaceae/ngc_ft1_sha1/components.cpp
./solanaceae/ngc_ft1_sha1/contact_components.hpp
./solanaceae/ngc_ft1_sha1/chunk_picker.hpp
./solanaceae/ngc_ft1_sha1/chunk_picker.cpp
./solanaceae/ngc_ft1_sha1/participation.hpp
./solanaceae/ngc_ft1_sha1/participation.cpp
./solanaceae/ngc_ft1_sha1/re_announce_systems.hpp
./solanaceae/ngc_ft1_sha1/re_announce_systems.cpp
./solanaceae/ngc_ft1_sha1/chunk_picker_systems.hpp
./solanaceae/ngc_ft1_sha1/chunk_picker_systems.cpp
./solanaceae/ngc_ft1_sha1/transfer_stats_systems.hpp
./solanaceae/ngc_ft1_sha1/transfer_stats_systems.cpp
./solanaceae/ngc_ft1_sha1/sending_transfers.hpp
./solanaceae/ngc_ft1_sha1/sending_transfers.cpp
./solanaceae/ngc_ft1_sha1/receiving_transfers.hpp
./solanaceae/ngc_ft1_sha1/receiving_transfers.cpp
./solanaceae/ngc_ft1_sha1/sha1_ngcft1.hpp
./solanaceae/ngc_ft1_sha1/sha1_ngcft1.cpp
)
@@ -65,5 +115,26 @@ target_link_libraries(solanaceae_sha1_ngcft1 PUBLIC
sha1::sha1
solanaceae_tox_contacts
solanaceae_message3
solanaceae_object_store
solanaceae_file2
)
########################################
option(SOLANACEAE_NGCFT1_SHA1_BUILD_TESTING "Build the solanaceae_ngcft1_sha1 tests" OFF)
message("II SOLANACEAE_NGCFT1_SHA1_BUILD_TESTING " ${SOLANACEAE_NGCFT1_SHA1_BUILD_TESTING})
# TODO: proper options n shit
if (SOLANACEAE_NGCFT1_SHA1_BUILD_TESTING)
include(CTest)
#add_executable(bitset_tests
# ./solanaceae/ngc_ft1_sha1/bitset_tests.cpp
#)
#target_link_libraries(bitset_tests PUBLIC
# solanaceae_sha1_ngcft1
#)
endif()
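The test executables above are still commented out, but the option already works: configuring with -DSOLANACEAE_NGCFT1_SHA1_BUILD_TESTING=ON pulls in CTest and is where the bitset tests would be registered.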


@@ -1,10 +1,13 @@
#include "./ngcext.hpp"
#include <iostream>
#include <cassert>
NGCEXTEventProvider::NGCEXTEventProvider(ToxEventProviderI& tep) : _tep(tep) {
_tep.subscribe(this, Tox_Event::TOX_EVENT_GROUP_CUSTOM_PACKET);
_tep.subscribe(this, Tox_Event::TOX_EVENT_GROUP_CUSTOM_PRIVATE_PACKET);
NGCEXTEventProvider::NGCEXTEventProvider(ToxI& t, ToxEventProviderI& tep) : _t(t), _tep(tep), _tep_sr(_tep.newSubRef(this)) {
_tep_sr
.subscribe(Tox_Event_Type::TOX_EVENT_GROUP_CUSTOM_PACKET)
.subscribe(Tox_Event_Type::TOX_EVENT_GROUP_CUSTOM_PRIVATE_PACKET)
;
}
#define _DATA_HAVE(x, error) if ((data_size - curser) < (x)) { error; }
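The _DATA_HAVE(x, error) guard used by all parsers below checks that at least x bytes remain past the read cursor before a field is deserialized; on underflow the error expression runs, which in every use here logs to stderr and returns false, aborting the parse.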
@@ -77,7 +80,7 @@ bool NGCEXTEventProvider::parse_ft1_init(
e.file_size = 0u;
_DATA_HAVE(sizeof(e.file_size), std::cerr << "NGCEXT: packet too small, missing file_size\n"; return false)
for (size_t i = 0; i < sizeof(e.file_size); i++, curser++) {
e.file_size |= size_t(data[curser]) << (i*8);
e.file_size |= uint64_t(data[curser]) << (i*8);
}
// - 1 byte (temporary_file_tf_id)
@@ -112,6 +115,85 @@ bool NGCEXTEventProvider::parse_ft1_init_ack(
_DATA_HAVE(sizeof(e.transfer_id), std::cerr << "NGCEXT: packet too small, missing transfer_id\n"; return false)
e.transfer_id = data[curser++];
e.max_lossy_data_size = 500-4; // -4 and 500 are hardcoded
return dispatch(
NGCEXT_Event::FT1_INIT_ACK,
e
);
}
bool NGCEXTEventProvider::parse_ft1_init_ack_v2(
uint32_t group_number, uint32_t peer_number,
const uint8_t* data, size_t data_size,
bool _private
) {
if (!_private) {
std::cerr << "NGCEXT: ft1_init_ack_v2 cant be public\n";
return false;
}
Events::NGCEXT_ft1_init_ack e;
e.group_number = group_number;
e.peer_number = peer_number;
size_t curser = 0;
// - 1 byte (temporary_file_tf_id)
_DATA_HAVE(sizeof(e.transfer_id), std::cerr << "NGCEXT: packet too small, missing transfer_id\n"; return false)
e.transfer_id = data[curser++];
// - 2 byte (max_lossy_data_size)
if ((data_size - curser) >= sizeof(e.max_lossy_data_size)) {
e.max_lossy_data_size = 0;
for (size_t i = 0; i < sizeof(e.max_lossy_data_size); i++, curser++) {
e.max_lossy_data_size |= uint16_t(data[curser]) << (i*8);
}
} else {
e.max_lossy_data_size = 500-4; // default
}
return dispatch(
NGCEXT_Event::FT1_INIT_ACK,
e
);
}
bool NGCEXTEventProvider::parse_ft1_init_ack_v3(
uint32_t group_number, uint32_t peer_number,
const uint8_t* data, size_t data_size,
bool _private
) {
if (!_private) {
std::cerr << "NGCEXT: ft1_init_ack_v3 cant be public\n";
return false;
}
Events::NGCEXT_ft1_init_ack e;
e.group_number = group_number;
e.peer_number = peer_number;
size_t curser = 0;
// - 1 byte (temporary_file_tf_id)
_DATA_HAVE(sizeof(e.transfer_id), std::cerr << "NGCEXT: packet too small, missing transfer_id\n"; return false)
e.transfer_id = data[curser++];
// - 2 byte (max_lossy_data_size)
if ((data_size - curser) >= sizeof(e.max_lossy_data_size)) {
e.max_lossy_data_size = 0;
for (size_t i = 0; i < sizeof(e.max_lossy_data_size); i++, curser++) {
e.max_lossy_data_size |= uint16_t(data[curser]) << (i*8);
}
} else {
e.max_lossy_data_size = 500-4; // default
}
// - 1 byte (feature_flags)
if ((data_size - curser) >= sizeof(e.feature_flags)) {
e.feature_flags = data[curser++];
} else {
e.feature_flags = 0x00; // default
}
return dispatch(
NGCEXT_Event::FT1_INIT_ACK,
e
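Note how the ack versions stay wire compatible by only ever appending fields: v2 adds the 2-byte max_lossy_data_size, v3 adds the 1-byte feature_flags, and each parser reads an extra field only if enough bytes remain, falling back to the defaults (500-4 and 0x00) otherwise. The v3 parser therefore also decodes v1/v2 acks, which is why handlePacket below routes FT1_INIT_ACK straight to parse_ft1_init_ack_v3.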
@@ -224,6 +306,209 @@ bool NGCEXTEventProvider::parse_ft1_message(
);
}
bool NGCEXTEventProvider::parse_ft1_have(
uint32_t group_number, uint32_t peer_number,
const uint8_t* data, size_t data_size,
bool _private
) {
if (!_private) {
std::cerr << "NGCEXT: ft1_have cant be public\n";
return false;
}
Events::NGCEXT_ft1_have e;
e.group_number = group_number;
e.peer_number = peer_number;
size_t curser = 0;
// - 4 byte (file_kind)
e.file_kind = 0u;
_DATA_HAVE(sizeof(e.file_kind), std::cerr << "NGCEXT: packet too small, missing file_kind\n"; return false)
for (size_t i = 0; i < sizeof(e.file_kind); i++, curser++) {
e.file_kind |= uint32_t(data[curser]) << (i*8);
}
// - X bytes (file_kind dependent id, different sizes)
uint16_t file_id_size = 0u;
_DATA_HAVE(sizeof(file_id_size), std::cerr << "NGCEXT: packet too small, missing file_id_size\n"; return false)
for (size_t i = 0; i < sizeof(file_id_size); i++, curser++) {
file_id_size |= uint32_t(data[curser]) << (i*8);
}
_DATA_HAVE(file_id_size, std::cerr << "NGCEXT: packet too small, missing file_id, or file_id_size too large(" << data_size-curser << ")\n"; return false)
e.file_id = {data+curser, data+curser+file_id_size};
curser += file_id_size;
// - array [
// - 4 bytes (chunk index)
// - ]
while (curser < data_size) {
_DATA_HAVE(sizeof(uint32_t), std::cerr << "NGCEXT: packet too small, broken chunk index\n"; return false)
uint32_t chunk_index = 0u;
for (size_t i = 0; i < sizeof(chunk_index); i++, curser++) {
chunk_index |= uint32_t(data[curser]) << (i*8);
}
e.chunks.push_back(chunk_index);
}
return dispatch(
NGCEXT_Event::FT1_HAVE,
e
);
}
bool NGCEXTEventProvider::parse_ft1_bitset(
uint32_t group_number, uint32_t peer_number,
const uint8_t* data, size_t data_size,
bool _private
) {
if (!_private) {
std::cerr << "NGCEXT: ft1_bitset cant be public\n";
return false;
}
Events::NGCEXT_ft1_bitset e;
e.group_number = group_number;
e.peer_number = peer_number;
size_t curser = 0;
// - 4 byte (file_kind)
e.file_kind = 0u;
_DATA_HAVE(sizeof(e.file_kind), std::cerr << "NGCEXT: packet too small, missing file_kind\n"; return false)
for (size_t i = 0; i < sizeof(e.file_kind); i++, curser++) {
e.file_kind |= uint32_t(data[curser]) << (i*8);
}
// - X bytes (file_kind dependent id, different sizes)
uint16_t file_id_size = 0u;
_DATA_HAVE(sizeof(file_id_size), std::cerr << "NGCEXT: packet too small, missing file_id_size\n"; return false)
for (size_t i = 0; i < sizeof(file_id_size); i++, curser++) {
file_id_size |= uint32_t(data[curser]) << (i*8);
}
_DATA_HAVE(file_id_size, std::cerr << "NGCEXT: packet too small, missing file_id, or file_id_size too large (" << data_size-curser << ")\n"; return false)
e.file_id = {data+curser, data+curser+file_id_size};
curser += file_id_size;
e.start_chunk = 0u;
_DATA_HAVE(sizeof(e.start_chunk), std::cerr << "NGCEXT: packet too small, missing start_chunk\n"; return false)
for (size_t i = 0; i < sizeof(e.start_chunk); i++, curser++) {
e.start_chunk |= uint32_t(data[curser]) << (i*8);
}
// - X bytes
// - array [
// - 1 bit (have chunk)
// - ] (filled up with zero)
// high to low?
// simply the rest of the packet
e.chunk_bitset = {data+curser, data+curser+(data_size-curser)};
return dispatch(
NGCEXT_Event::FT1_BITSET,
e
);
}
bool NGCEXTEventProvider::parse_ft1_have_all(
uint32_t group_number, uint32_t peer_number,
const uint8_t* data, size_t data_size,
bool _private
) {
// can be public
// TODO: warn on public?
Events::NGCEXT_ft1_have_all e;
e.group_number = group_number;
e.peer_number = peer_number;
size_t curser = 0;
// - 4 byte (file_kind)
e.file_kind = 0u;
_DATA_HAVE(sizeof(e.file_kind), std::cerr << "NGCEXT: packet too small, missing file_kind\n"; return false)
for (size_t i = 0; i < sizeof(e.file_kind); i++, curser++) {
e.file_kind |= uint32_t(data[curser]) << (i*8);
}
_DATA_HAVE(1, std::cerr << "NGCEXT: packet too small, missing file_id\n"; return false)
// - X bytes (file_id, different sizes)
e.file_id = {data+curser, data+curser+(data_size-curser)};
return dispatch(
NGCEXT_Event::FT1_HAVE_ALL,
e
);
}
bool NGCEXTEventProvider::parse_ft1_init2(
uint32_t group_number, uint32_t peer_number,
const uint8_t* data, size_t data_size,
bool _private
) {
if (!_private) {
std::cerr << "NGCEXT: ft1_init2 cant be public\n";
return false;
}
Events::NGCEXT_ft1_init2 e;
e.group_number = group_number;
e.peer_number = peer_number;
size_t curser = 0;
// - 4 byte (file_kind)
e.file_kind = 0u;
_DATA_HAVE(sizeof(e.file_kind), std::cerr << "NGCEXT: packet too small, missing file_kind\n"; return false)
for (size_t i = 0; i < sizeof(e.file_kind); i++, curser++) {
e.file_kind |= uint32_t(data[curser]) << (i*8);
}
// - 8 bytes (data size)
e.file_size = 0u;
_DATA_HAVE(sizeof(e.file_size), std::cerr << "NGCEXT: packet too small, missing file_size\n"; return false)
for (size_t i = 0; i < sizeof(e.file_size); i++, curser++) {
e.file_size |= uint64_t(data[curser]) << (i*8);
}
// - 1 byte (temporary_file_tf_id)
_DATA_HAVE(sizeof(e.transfer_id), std::cerr << "NGCEXT: packet too small, missing transfer_id\n"; return false)
e.transfer_id = data[curser++];
// - 1 byte feature flags
_DATA_HAVE(sizeof(e.feature_flags), std::cerr << "NGCEXT: packet too small, missing feature_flags\n"; return false)
e.feature_flags = data[curser++];
// - X bytes (file_kind dependent id, different sizes)
e.file_id = {data+curser, data+curser+(data_size-curser)};
return dispatch(
NGCEXT_Event::FT1_INIT2,
e
);
}
bool NGCEXTEventProvider::parse_pc1_announce(
uint32_t group_number, uint32_t peer_number,
const uint8_t* data, size_t data_size,
bool _private
) {
// can be public
Events::NGCEXT_pc1_announce e;
e.group_number = group_number;
e.peer_number = peer_number;
size_t curser = 0;
// - X bytes (id, different sizes)
e.id = {data+curser, data+curser+(data_size-curser)};
return dispatch(
NGCEXT_Event::PC1_ANNOUNCE,
e
);
}
bool NGCEXTEventProvider::handlePacket(
const uint32_t group_number,
const uint32_t peer_number,
@@ -247,13 +532,25 @@ bool NGCEXTEventProvider::handlePacket(
case NGCEXT_Event::FT1_INIT:
return parse_ft1_init(group_number, peer_number, data+1, data_size-1, _private);
case NGCEXT_Event::FT1_INIT_ACK:
return parse_ft1_init_ack(group_number, peer_number, data+1, data_size-1, _private);
//return parse_ft1_init_ack(group_number, peer_number, data+1, data_size-1, _private);
//return parse_ft1_init_ack_v2(group_number, peer_number, data+1, data_size-1, _private);
return parse_ft1_init_ack_v3(group_number, peer_number, data+1, data_size-1, _private);
case NGCEXT_Event::FT1_DATA:
return parse_ft1_data(group_number, peer_number, data+1, data_size-1, _private);
case NGCEXT_Event::FT1_DATA_ACK:
return parse_ft1_data_ack(group_number, peer_number, data+1, data_size-1, _private);
case NGCEXT_Event::FT1_MESSAGE:
return parse_ft1_message(group_number, peer_number, data+1, data_size-1, _private);
case NGCEXT_Event::FT1_HAVE:
return parse_ft1_have(group_number, peer_number, data+1, data_size-1, _private);
case NGCEXT_Event::FT1_BITSET:
return parse_ft1_bitset(group_number, peer_number, data+1, data_size-1, _private);
case NGCEXT_Event::FT1_HAVE_ALL:
return parse_ft1_have_all(group_number, peer_number, data+1, data_size-1, _private);
case NGCEXT_Event::FT1_INIT2:
return parse_ft1_init2(group_number, peer_number, data+1, data_size-1, _private);
case NGCEXT_Event::PC1_ANNOUNCE:
return parse_pc1_announce(group_number, peer_number, data+1, data_size-1, _private);
default:
return false;
}
@@ -261,6 +558,309 @@ bool NGCEXTEventProvider::handlePacket(
return false;
}
bool NGCEXTEventProvider::send_ft1_request(
uint32_t group_number, uint32_t peer_number,
uint32_t file_kind,
const uint8_t* file_id, size_t file_id_size
) {
// - 1 byte packet id
// - 4 byte file_kind
// - X bytes file_id
std::vector<uint8_t> pkg;
pkg.push_back(static_cast<uint8_t>(NGCEXT_Event::FT1_REQUEST));
for (size_t i = 0; i < sizeof(file_kind); i++) {
pkg.push_back((file_kind>>(i*8)) & 0xff);
}
for (size_t i = 0; i < file_id_size; i++) {
pkg.push_back(file_id[i]);
}
// lossless
return _t.toxGroupSendCustomPrivatePacket(group_number, peer_number, true, pkg) == TOX_ERR_GROUP_SEND_CUSTOM_PRIVATE_PACKET_OK;
}
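All of the send_* functions hand-roll the same loop to serialize multi-byte integers in little-endian order. A minimal sketch of that pattern as a reusable helper (editor's illustration; push_le is a hypothetical name, not part of this codebase):

#include <cstddef>
#include <cstdint>
#include <vector>

template<typename T>
static void push_le(std::vector<uint8_t>& pkg, T value) {
	// append value lowest byte first, mirroring the (value>>(i*8)) & 0xff loops here
	for (size_t i = 0; i < sizeof(T); i++) {
		pkg.push_back(static_cast<uint8_t>((value >> (i*8)) & 0xff));
	}
}

With it, the file_kind and file_size loops would each collapse to a single push_le(pkg, ...) call.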
bool NGCEXTEventProvider::send_ft1_init(
uint32_t group_number, uint32_t peer_number,
uint32_t file_kind,
uint64_t file_size,
uint8_t transfer_id,
const uint8_t* file_id, size_t file_id_size
) {
// - 1 byte packet id
// - 4 byte (file_kind)
// - 8 bytes (data size)
// - 1 byte (temporary_file_tf_id, for this peer only, technically just a prefix to distinguish between simultaneous fts)
// - X bytes (file_kind dependent id, different sizes)
std::vector<uint8_t> pkg;
pkg.push_back(static_cast<uint8_t>(NGCEXT_Event::FT1_INIT));
for (size_t i = 0; i < sizeof(file_kind); i++) {
pkg.push_back((file_kind>>(i*8)) & 0xff);
}
for (size_t i = 0; i < sizeof(file_size); i++) {
pkg.push_back((file_size>>(i*8)) & 0xff);
}
pkg.push_back(transfer_id);
for (size_t i = 0; i < file_id_size; i++) {
pkg.push_back(file_id[i]);
}
// lossless
return _t.toxGroupSendCustomPrivatePacket(group_number, peer_number, true, pkg) == TOX_ERR_GROUP_SEND_CUSTOM_PRIVATE_PACKET_OK;
}
bool NGCEXTEventProvider::send_ft1_init_ack(
uint32_t group_number, uint32_t peer_number,
uint8_t transfer_id
) {
// - 1 byte packet id
// - 1 byte transfer_id
std::vector<uint8_t> pkg;
pkg.push_back(static_cast<uint8_t>(NGCEXT_Event::FT1_INIT_ACK));
pkg.push_back(transfer_id);
// - 2 bytes max_lossy_data_size
const uint16_t max_lossy_data_size = _t.toxGroupMaxCustomLossyPacketLength() - 4;
for (size_t i = 0; i < sizeof(uint16_t); i++) {
pkg.push_back((max_lossy_data_size>>(i*8)) & 0xff);
}
// lossless
return _t.toxGroupSendCustomPrivatePacket(group_number, peer_number, true, pkg) == TOX_ERR_GROUP_SEND_CUSTOM_PRIVATE_PACKET_OK;
}
bool NGCEXTEventProvider::send_ft1_data(
uint32_t group_number, uint32_t peer_number,
uint8_t transfer_id,
uint16_t sequence_id,
const uint8_t* data, size_t data_size
) {
assert(data_size > 0);
// TODO
// check header_size+data_size <= max pkg size
std::vector<uint8_t> pkg;
pkg.reserve(2048); // saves a ton of allocations
pkg.push_back(static_cast<uint8_t>(NGCEXT_Event::FT1_DATA));
pkg.push_back(transfer_id);
pkg.push_back(sequence_id & 0xff);
pkg.push_back((sequence_id >> (1*8)) & 0xff);
// TODO: optimize
for (size_t i = 0; i < data_size; i++) {
pkg.push_back(data[i]);
}
// lossy
return _t.toxGroupSendCustomPrivatePacket(group_number, peer_number, false, pkg) == TOX_ERR_GROUP_SEND_CUSTOM_PRIVATE_PACKET_OK;
}
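The 4 header bytes here (1 packet id, 1 transfer_id, 2 sequence_id) are the overhead behind the 500-4 fallback in the init_ack parsers above: the usable lossy payload is the maximum lossy packet length minus this header, which is exactly what send_ft1_init_ack computes via toxGroupMaxCustomLossyPacketLength() - 4 (500 presumably being the old hardcoded lossy limit).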
bool NGCEXTEventProvider::send_ft1_data_ack(
uint32_t group_number, uint32_t peer_number,
uint8_t transfer_id,
const uint16_t* seq_ids, size_t seq_ids_size
) {
std::vector<uint8_t> pkg;
pkg.reserve(1+1+2*32); // 32acks in a single pkg should be unlikely
pkg.push_back(static_cast<uint8_t>(NGCEXT_Event::FT1_DATA_ACK));
pkg.push_back(transfer_id);
// TODO: optimize
for (size_t i = 0; i < seq_ids_size; i++) {
pkg.push_back(seq_ids[i] & 0xff);
pkg.push_back((seq_ids[i] >> (1*8)) & 0xff);
}
// lossy
return _t.toxGroupSendCustomPrivatePacket(group_number, peer_number, false, pkg) == TOX_ERR_GROUP_SEND_CUSTOM_PRIVATE_PACKET_OK;
}
bool NGCEXTEventProvider::send_all_ft1_message(
uint32_t group_number,
uint32_t message_id,
uint32_t file_kind,
const uint8_t* file_id, size_t file_id_size
) {
std::vector<uint8_t> pkg;
pkg.push_back(static_cast<uint8_t>(NGCEXT_Event::FT1_MESSAGE));
for (size_t i = 0; i < sizeof(message_id); i++) {
pkg.push_back((message_id>>(i*8)) & 0xff);
}
for (size_t i = 0; i < sizeof(file_kind); i++) {
pkg.push_back((file_kind>>(i*8)) & 0xff);
}
for (size_t i = 0; i < file_id_size; i++) {
pkg.push_back(file_id[i]);
}
// lossless
return _t.toxGroupSendCustomPacket(group_number, true, pkg) == TOX_ERR_GROUP_SEND_CUSTOM_PACKET_OK;
}
bool NGCEXTEventProvider::send_ft1_have(
uint32_t group_number, uint32_t peer_number,
uint32_t file_kind,
const uint8_t* file_id, size_t file_id_size,
const uint32_t* chunks_data, size_t chunks_size
) {
// 16bit file id size
assert(file_id_size <= 0xffff);
if (file_id_size > 0xffff) {
return false;
}
std::vector<uint8_t> pkg;
pkg.push_back(static_cast<uint8_t>(NGCEXT_Event::FT1_HAVE));
for (size_t i = 0; i < sizeof(file_kind); i++) {
pkg.push_back((file_kind>>(i*8)) & 0xff);
}
// file id not last in packet, needs explicit size
const uint16_t file_id_size_cast = file_id_size;
for (size_t i = 0; i < sizeof(file_id_size_cast); i++) {
pkg.push_back((file_id_size_cast>>(i*8)) & 0xff);
}
for (size_t i = 0; i < file_id_size; i++) {
pkg.push_back(file_id[i]);
}
// rest is chunks
for (size_t c_i = 0; c_i < chunks_size; c_i++) {
for (size_t i = 0; i < sizeof(chunks_data[c_i]); i++) {
pkg.push_back((chunks_data[c_i]>>(i*8)) & 0xff);
}
}
// lossless
return _t.toxGroupSendCustomPrivatePacket(group_number, peer_number, true, pkg) == TOX_ERR_GROUP_SEND_CUSTOM_PRIVATE_PACKET_OK;
}
bool NGCEXTEventProvider::send_ft1_bitset(
uint32_t group_number, uint32_t peer_number,
uint32_t file_kind,
const uint8_t* file_id, size_t file_id_size,
uint32_t start_chunk,
const uint8_t* bitset_data, size_t bitset_size // size is bytes
) {
std::vector<uint8_t> pkg;
pkg.push_back(static_cast<uint8_t>(NGCEXT_Event::FT1_BITSET));
for (size_t i = 0; i < sizeof(file_kind); i++) {
pkg.push_back((file_kind>>(i*8)) & 0xff);
}
// file id not last in packet, needs explicit size
const uint16_t file_id_size_cast = file_id_size;
for (size_t i = 0; i < sizeof(file_id_size_cast); i++) {
pkg.push_back((file_id_size_cast>>(i*8)) & 0xff);
}
for (size_t i = 0; i < file_id_size; i++) {
pkg.push_back(file_id[i]);
}
for (size_t i = 0; i < sizeof(start_chunk); i++) {
pkg.push_back((start_chunk>>(i*8)) & 0xff);
}
for (size_t i = 0; i < bitset_size; i++) {
pkg.push_back(bitset_data[i]);
}
// lossless
return _t.toxGroupSendCustomPrivatePacket(group_number, peer_number, true, pkg) == TOX_ERR_GROUP_SEND_CUSTOM_PRIVATE_PACKET_OK;
}
bool NGCEXTEventProvider::send_ft1_have_all(
uint32_t group_number, uint32_t peer_number,
uint32_t file_kind,
const uint8_t* file_id, size_t file_id_size
) {
std::vector<uint8_t> pkg;
pkg.push_back(static_cast<uint8_t>(NGCEXT_Event::FT1_HAVE_ALL));
for (size_t i = 0; i < sizeof(file_kind); i++) {
pkg.push_back((file_kind>>(i*8)) & 0xff);
}
for (size_t i = 0; i < file_id_size; i++) {
pkg.push_back(file_id[i]);
}
// lossless
return _t.toxGroupSendCustomPrivatePacket(group_number, peer_number, true, pkg) == TOX_ERR_GROUP_SEND_CUSTOM_PRIVATE_PACKET_OK;
}
bool NGCEXTEventProvider::send_ft1_init2(
uint32_t group_number, uint32_t peer_number,
uint32_t file_kind,
uint64_t file_size,
uint8_t transfer_id,
uint8_t feature_flags,
const uint8_t* file_id, size_t file_id_size
) {
// - 1 byte packet id
// - 4 byte (file_kind)
// - 8 bytes (data size)
// - 1 byte (temporary_file_tf_id, for this peer only, technically just a prefix to distinguish between simultaneous fts)
// - 1 byte (feature_flags)
// - X bytes (file_kind dependent id, different sizes)
std::vector<uint8_t> pkg;
pkg.push_back(static_cast<uint8_t>(NGCEXT_Event::FT1_INIT2));
for (size_t i = 0; i < sizeof(file_kind); i++) {
pkg.push_back((file_kind>>(i*8)) & 0xff);
}
for (size_t i = 0; i < sizeof(file_size); i++) {
pkg.push_back((file_size>>(i*8)) & 0xff);
}
pkg.push_back(transfer_id);
pkg.push_back(feature_flags);
for (size_t i = 0; i < file_id_size; i++) {
pkg.push_back(file_id[i]);
}
// lossless
return _t.toxGroupSendCustomPrivatePacket(group_number, peer_number, true, pkg) == TOX_ERR_GROUP_SEND_CUSTOM_PRIVATE_PACKET_OK;
}
static std::vector<uint8_t> build_pc1_announce(const uint8_t* id_data, size_t id_size) {
// - 1 byte packet id
// - X bytes (id, different sizes)
std::vector<uint8_t> pkg;
pkg.push_back(static_cast<uint8_t>(NGCEXT_Event::PC1_ANNOUNCE));
for (size_t i = 0; i < id_size; i++) {
pkg.push_back(id_data[i]);
}
return pkg;
}
bool NGCEXTEventProvider::send_pc1_announce(
uint32_t group_number, uint32_t peer_number,
const uint8_t* id_data, size_t id_size
) {
auto pkg = build_pc1_announce(id_data, id_size);
std::cout << "NEEP: sending PC1_ANNOUNCE s:" << pkg.size() - sizeof(NGCEXT_Event::PC1_ANNOUNCE) << "\n";
// lossless?
return _t.toxGroupSendCustomPrivatePacket(group_number, peer_number, true, pkg) == TOX_ERR_GROUP_SEND_CUSTOM_PRIVATE_PACKET_OK;
}
bool NGCEXTEventProvider::send_all_pc1_announce(
uint32_t group_number,
const uint8_t* id_data, size_t id_size
) {
auto pkg = build_pc1_announce(id_data, id_size);
std::cout << "NEEP: sending all PC1_ANNOUNCE s:" << pkg.size() - sizeof(NGCEXT_Event::PC1_ANNOUNCE) << "\n";
// lossless?
return _t.toxGroupSendCustomPacket(group_number, true, pkg) == TOX_ERR_GROUP_SEND_CUSTOM_PACKET_OK;
}
bool NGCEXTEventProvider::onToxEvent(const Tox_Event_Group_Custom_Packet* e) {
const auto group_number = tox_event_group_custom_packet_get_group_number(e);
const auto peer_number = tox_event_group_custom_packet_get_peer_id(e);


@@ -3,11 +3,11 @@
// solanaceae port of tox_ngc_ext
#include <solanaceae/toxcore/tox_event_interface.hpp>
#include <solanaceae/toxcore/tox_interface.hpp>
#include <solanaceae/util/event_provider.hpp>
#include <solanaceae/toxcore/tox_key.hpp>
#include <array>
#include <vector>
namespace Events {
@@ -30,6 +30,7 @@ namespace Events {
uint32_t peer_number;
// respond to a request with 0 or more message ids, sorted by newest first
// - peer_key bytes (the msg_ids are from)
ToxKey peer_key;
@@ -47,6 +48,7 @@ namespace Events {
uint32_t peer_number;
// request the other side to initiate a FT
// - 4 byte (file_kind)
uint32_t file_kind;
@@ -54,11 +56,13 @@ namespace Events {
std::vector<uint8_t> file_id;
};
// DEPRECATED: use FT1_INIT2 instead
struct NGCEXT_ft1_init {
uint32_t group_number;
uint32_t peer_number;
// tell the other side you want to start a FT
// - 4 byte (file_kind)
uint32_t file_kind;
@@ -70,8 +74,6 @@ namespace Events {
// - X bytes (file_kind dependent id, different sizes)
std::vector<uint8_t> file_id;
// TODO: max supported lossy packet size
};
struct NGCEXT_ft1_init_ack {
@@ -81,7 +83,13 @@ namespace Events {
// - 1 byte (transfer_id)
uint8_t transfer_id;
// TODO: max supported lossy packet size
// - 2 byte (self_max_lossy_data_size)
uint16_t max_lossy_data_size;
// - 1 byte feature flags
// - 0x01 advertised zstd compression
// - 0x02
uint8_t feature_flags;
};
struct NGCEXT_ft1_data {
@@ -89,6 +97,7 @@ namespace Events {
uint32_t peer_number;
// data fragment
// - 1 byte (temporary_file_tf_id)
uint8_t transfer_id;
@@ -120,7 +129,6 @@ namespace Events {
// - 4 byte (message_id)
uint32_t message_id;
// request the other side to initiate a FT
// - 4 byte (file_kind)
uint32_t file_kind;
@@ -128,6 +136,84 @@ namespace Events {
std::vector<uint8_t> file_id;
};
struct NGCEXT_ft1_have {
uint32_t group_number;
uint32_t peer_number;
// - 4 byte (file_kind)
uint32_t file_kind;
// - X bytes (file_kind dependent id, different sizes)
std::vector<uint8_t> file_id;
// - array [
// - 4 bytes (chunk index)
// - ]
std::vector<uint32_t> chunks;
};
struct NGCEXT_ft1_bitset {
uint32_t group_number;
uint32_t peer_number;
// - 4 byte (file_kind)
uint32_t file_kind;
// - X bytes (file_kind dependent id, different sizes)
std::vector<uint8_t> file_id;
uint32_t start_chunk;
// - array [
// - 1 bit (have chunk)
// - ] (filled up with zero)
// high to low?
std::vector<uint8_t> chunk_bitset;
};
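A sketch of how a receiver could test a single chunk against such a bitset (editor's illustration; the "high to low?" comment leaves the in-byte bit order open, so high-to-low is assumed here):

#include <cstdint>
#include <vector>

// is chunk (start_chunk + rel_index) marked as available?
// assumes bit 7 of byte 0 is the first chunk; bits past the end count as not-have
static bool bitsetHasChunk(const std::vector<uint8_t>& chunk_bitset, uint32_t rel_index) {
	const size_t byte_i = rel_index / 8;
	const uint8_t bit_i = 7 - (rel_index % 8);
	if (byte_i >= chunk_bitset.size()) {
		return false; // zero-filled tail
	}
	return (chunk_bitset[byte_i] >> bit_i) & 0x1;
}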
struct NGCEXT_ft1_have_all {
uint32_t group_number;
uint32_t peer_number;
// - 4 byte (file_kind)
uint32_t file_kind;
// - X bytes (file_kind dependent id, different sizes)
std::vector<uint8_t> file_id;
};
struct NGCEXT_ft1_init2 {
uint32_t group_number;
uint32_t peer_number;
// tell the other side you want to start a FT
// - 4 byte (file_kind)
uint32_t file_kind;
// - 8 bytes (data size)
uint64_t file_size;
// - 1 byte (temporary_file_tf_id, for this peer only, technically just a prefix to distinguish between simultaneous fts)
uint8_t transfer_id;
// - 1 byte feature flags
// - 0x01 advertise zstd compression
// - 0x02
uint8_t feature_flags;
// - X bytes (file_kind dependent id, different sizes)
std::vector<uint8_t> file_id;
};
struct NGCEXT_pc1_announce {
uint32_t group_number;
uint32_t peer_number;
// - X bytes (id, different sizes)
std::vector<uint8_t> id;
};
} // Events
enum class NGCEXT_Event : uint8_t {
@@ -154,6 +240,7 @@ enum class NGCEXT_Event : uint8_t {
// tell the other side you want to start a FT
// TODO: might use id layer instead. with it, it would look similar to friends_ft
// DEPRECATED: use FT1_INIT2 instead
// - 4 byte (file_kind)
// - 8 bytes (data size, can be 0 if unknown, BUT files have to be at least 1 byte)
// - 1 byte (temporary_file_tf_id, for this peer only, technically just a prefix to distinguish between simultaneous fts)
@@ -163,6 +250,8 @@ enum class NGCEXT_Event : uint8_t {
// acknowledge init (like an accept)
// like tox ft control continue
// - 1 byte (transfer_id)
// - 2 byte (self_max_lossy_data_size) (optional since v2)
// - 1 byte feature flags (optional since v3, requires prev)
FT1_INIT_ACK,
// TODO: init deny, speed up non acceptance
@@ -186,11 +275,63 @@ enum class NGCEXT_Event : uint8_t {
// send file as message
// basically the opposite of request
// contains file_kind and file_id (and timestamp?)
// - 4 byte (message_id)
// - 4 byte (file_kind)
// - 4 bytes (message_id)
// - 4 bytes (file_kind)
// - X bytes (file_kind dependent id, different sizes)
FT1_MESSAGE,
// announce you have specified chunks, for given info
// this is info/chunk specific
// bundle these together to reduce overhead (like maybe every 16, max 1min)
// - 4 bytes (file_kind)
// - X bytes (file_kind dependent id, different sizes)
// - array [
// - 4 bytes (chunk index)
// - ]
FT1_HAVE,
// tell the other peer which chunks you have for a given info
// compressed down to a bitset (in parts)
// supposed to only be sent once, on participation announcement, when there is mutual interest
// the other side always assumes you don't have a chunk until told otherwise,
// so you can be smart about what you send.
// - 4 bytes (file_kind)
// - X bytes (file_kind dependent id, different sizes)
// - 4 bytes (first chunk index in bitset)
// - array [
// - 1 bit (have chunk)
// - ] (filled up with zero)
FT1_BITSET,
// announce you have all chunks, for given info
// prefer over have and bitset
// - 4 bytes (file_kind)
// - X bytes (file_kind dependent id, different sizes)
FT1_HAVE_ALL,
// tell the other side you want to start a FT
// update: added feature flags (compression)
// - 4 byte (file_kind)
// - 8 bytes (data size, can be 0 if unknown, BUT files have to be at least 1 byte)
// - 1 byte (temporary_file_tf_id, for this peer only, technically just a prefix to distinguish between simultaneous fts)
// - 1 byte feature flags
// - X bytes (file_kind dependent id, different sizes)
FT1_INIT2,
// TODO: FT1_IDONTHAVE, tell a peer you no longer have said chunk
// TODO: FT1_REJECT, tell a peer you wont fulfil the request
// TODO: FT1_CANCEL, tell a peer you stop the transfer
// tell another peer that you are participating in X
// you can reply with PC1_ANNOUNCE, to let the other side know you too are participating in X
// you should NOT announce often, since this hits peers that do not participate
// ft1 uses fk+id
// - x bytes (id, different sizes)
PC1_ANNOUNCE = 0x80 | 32u,
// uses sub splitting
P2PRNG = 0x80 | 38u,
MAX
};
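Taken together, a single transfer roughly flows like this (editor's summary of the packet docs above, not a normative spec):

	requester: FT1_REQUEST  (file_kind, file_id)
	sender:    FT1_INIT2    (file_kind, file_size, transfer_id, feature_flags, file_id)
	requester: FT1_INIT_ACK (transfer_id[, max_lossy_data_size][, feature_flags])
	sender:    FT1_DATA     (transfer_id, sequence_id, data)   lossy, repeated
	requester: FT1_DATA_ACK (transfer_id, sequence ids)        lossy, repeated

while FT1_HAVE / FT1_BITSET / FT1_HAVE_ALL and PC1_ANNOUNCE run alongside to advertise chunk availability and participation.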
@@ -204,15 +345,22 @@ struct NGCEXTEventI {
virtual bool onEvent(const Events::NGCEXT_ft1_data&) { return false; }
virtual bool onEvent(const Events::NGCEXT_ft1_data_ack&) { return false; }
virtual bool onEvent(const Events::NGCEXT_ft1_message&) { return false; }
virtual bool onEvent(const Events::NGCEXT_ft1_have&) { return false; }
virtual bool onEvent(const Events::NGCEXT_ft1_bitset&) { return false; }
virtual bool onEvent(const Events::NGCEXT_ft1_have_all&) { return false; }
virtual bool onEvent(const Events::NGCEXT_ft1_init2&) { return false; }
virtual bool onEvent(const Events::NGCEXT_pc1_announce&) { return false; }
};
using NGCEXTEventProviderI = EventProviderI<NGCEXTEventI>;
class NGCEXTEventProvider : public ToxEventI, public NGCEXTEventProviderI {
ToxI& _t;
ToxEventProviderI& _tep;
ToxEventProviderI::SubscriptionReference _tep_sr;
public:
NGCEXTEventProvider(ToxEventProviderI& tep/*, ToxI& t*/);
NGCEXTEventProvider(ToxI& t, ToxEventProviderI& tep);
protected:
bool parse_hs1_request_last_ids(
@@ -245,6 +393,18 @@ class NGCEXTEventProvider : public ToxEventI, public NGCEXTEventProviderI {
bool _private
);
bool parse_ft1_init_ack_v2(
uint32_t group_number, uint32_t peer_number,
const uint8_t* data, size_t data_size,
bool _private
);
bool parse_ft1_init_ack_v3(
uint32_t group_number, uint32_t peer_number,
const uint8_t* data, size_t data_size,
bool _private
);
bool parse_ft1_data(
uint32_t group_number, uint32_t peer_number,
const uint8_t* data, size_t data_size,
@@ -263,6 +423,36 @@ class NGCEXTEventProvider : public ToxEventI, public NGCEXTEventProviderI {
bool _private
);
bool parse_ft1_have(
uint32_t group_number, uint32_t peer_number,
const uint8_t* data, size_t data_size,
bool _private
);
bool parse_ft1_bitset(
uint32_t group_number, uint32_t peer_number,
const uint8_t* data, size_t data_size,
bool _private
);
bool parse_ft1_have_all(
uint32_t group_number, uint32_t peer_number,
const uint8_t* data, size_t data_size,
bool _private
);
bool parse_ft1_init2(
uint32_t group_number, uint32_t peer_number,
const uint8_t* data, size_t data_size,
bool _private
);
bool parse_pc1_announce(
uint32_t group_number, uint32_t peer_number,
const uint8_t* data, size_t data_size,
bool _private
);
bool handlePacket(
const uint32_t group_number,
const uint32_t peer_number,
@@ -271,6 +461,87 @@ class NGCEXTEventProvider : public ToxEventI, public NGCEXTEventProviderI {
const bool _private
);
public: // send api
bool send_ft1_request(
uint32_t group_number, uint32_t peer_number,
uint32_t file_kind,
const uint8_t* file_id, size_t file_id_size
);
bool send_ft1_init(
uint32_t group_number, uint32_t peer_number,
uint32_t file_kind,
uint64_t file_size,
uint8_t transfer_id,
const uint8_t* file_id, size_t file_id_size
);
bool send_ft1_init_ack(
uint32_t group_number, uint32_t peer_number,
uint8_t transfer_id
);
bool send_ft1_data(
uint32_t group_number, uint32_t peer_number,
uint8_t transfer_id,
uint16_t sequence_id,
const uint8_t* data, size_t data_size
);
bool send_ft1_data_ack(
uint32_t group_number, uint32_t peer_number,
uint8_t transfer_id,
const uint16_t* seq_ids, size_t seq_ids_size
);
// TODO: add private version
bool send_all_ft1_message(
uint32_t group_number,
uint32_t message_id,
uint32_t file_kind,
const uint8_t* file_id, size_t file_id_size
);
bool send_ft1_have(
uint32_t group_number, uint32_t peer_number,
uint32_t file_kind,
const uint8_t* file_id, size_t file_id_size,
const uint32_t* chunks_data, size_t chunks_size
);
bool send_ft1_bitset(
uint32_t group_number, uint32_t peer_number,
uint32_t file_kind,
const uint8_t* file_id, size_t file_id_size,
uint32_t start_chunk,
const uint8_t* bitset_data, size_t bitset_size // size is bytes
);
bool send_ft1_have_all(
uint32_t group_number, uint32_t peer_number,
uint32_t file_kind,
const uint8_t* file_id, size_t file_id_size
);
bool send_ft1_init2(
uint32_t group_number, uint32_t peer_number,
uint32_t file_kind,
uint64_t file_size,
uint8_t transfer_id,
uint8_t feature_flags,
const uint8_t* file_id, size_t file_id_size
);
bool send_pc1_announce(
uint32_t group_number, uint32_t peer_number,
const uint8_t* id_data, size_t id_size
);
bool send_all_pc1_announce(
uint32_t group_number,
const uint8_t* id_data, size_t id_size
);
protected:
bool onToxEvent(const Tox_Event_Group_Custom_Packet* e) override;
bool onToxEvent(const Tox_Event_Group_Custom_Private_Packet* e) override;


@@ -5,15 +5,6 @@
#include <cstddef>
// TODO: refactor, more state tracking in ccai and separate into flow and congestion algos
inline bool isSkipSeqID(const std::pair<uint8_t, uint16_t>& a, const std::pair<uint8_t, uint16_t>& b) {
// this is not perfect, would need more ft id based history
if (a.first != b.first) {
return false; // we dont know
} else {
return a.second+1 != b.second;
}
}
struct CCAI {
public: // config
using SeqIDType = std::pair<uint8_t, uint16_t>; // tf_id, seq_id
@@ -38,22 +29,38 @@ struct CCAI {
//static_assert(maximum_segment_size == 574); // measured in wireshark
// flow control
float max_byterate_allowed {10*1024*1024}; // 10MiB/s
//float max_byterate_allowed {100.f*1024*1024}; // 100MiB/s
float max_byterate_allowed {10.f*1024*1024}; // 10MiB/s
//float max_byterate_allowed {1.f*1024*1024}; // 1MiB/s
//float max_byterate_allowed {0.6f*1024*1024}; // 600KiB/s
//float max_byterate_allowed {0.5f*1024*1024}; // 500KiB/s
//float max_byterate_allowed {0.15f*1024*1024}; // 150KiB/s
//float max_byterate_allowed {0.05f*1024*1024}; // 50KiB/s
public: // api
CCAI(size_t maximum_segment_data_size) : MAXIMUM_SEGMENT_DATA_SIZE(maximum_segment_data_size) {}
virtual ~CCAI(void) {}
// returns current rtt/delay
virtual float getCurrentDelay(void) const = 0;
// return the current believed window in bytes of how much data can be inflight,
//virtual float getCWnD(void) const = 0;
virtual float getWindow(void) const = 0;
// TODO: api for how much data we should send
// take time since last sent into account
// respect max_byterate_allowed
virtual size_t canSend(void) = 0;
virtual int64_t canSend(float time_delta) = 0;
// get the list of timed out seq_ids
virtual std::vector<SeqIDType> getTimeouts(void) const = 0;
// returns -1 if not implemented, can return 0
virtual int64_t inFlightCount(void) const { return -1; }
// returns -1 if not implemented, can return 0
virtual int64_t inFlightBytes(void) const { return -1; }
public: // callbacks
// data size is without overhead
virtual void onSent(SeqIDType seq, size_t data_size) = 0;
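A sketch of how a sender might drive this interface once per tick (editor's illustration; resendPacket, haveQueuedData and sendNextDataPacket are hypothetical helpers, and the onAck/onLoss callbacks are assumed to be wired up elsewhere, as in FlowOnly below):

void tickSender(CCAI& cca, float time_delta) {
	// first re-send whatever the algorithm considers timed out
	for (const auto& seq : cca.getTimeouts()) {
		resendPacket(seq); // hypothetical
	}

	// then fill the remaining budget with fresh data
	int64_t budget = cca.canSend(time_delta);
	while (budget > 0 && haveQueuedData()) { // hypothetical
		const auto [seq, bytes] = sendNextDataPacket(); // hypothetical
		cca.onSent(seq, bytes); // data size without overhead
		budget -= bytes;
	}
}

canSend() already rounds its return value down to whole packets, so the loop stops at packet granularity.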


@@ -3,14 +3,27 @@
#include <cmath>
#include <iostream>
void CUBIC::updateReductionTimer(float time_delta) {
const auto now {getTimeNow()};
// only keep updating while the last cca interaction is not too long ago
// or simply when there are packets in flight
// (you need space to resend timed-out packets, which still use up pipe space)
if (!_in_flight.empty() || now - _time_point_last_update <= getCurrentDelay()*4.f) {
_time_since_reduction += time_delta;
}
}
void CUBIC::resetReductionTimer(void) {
_time_since_reduction = 0.f;
}
float CUBIC::getCWnD(void) const {
const double K = cbrt(
(_window_max * (1. - BETA)) / SCALING_CONSTANT
);
const double time_since_reduction = getTimeNow() - _time_point_reduction;
const double TK = time_since_reduction - K;
const double TK = _time_since_reduction - K;
const double cwnd =
SCALING_CONSTANT
@@ -33,29 +46,69 @@ float CUBIC::getCWnD(void) const {
}
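This is the standard CUBIC window function: with C = SCALING_CONSTANT, beta = BETA, W_max = _window_max and t = _time_since_reduction,

	W(t) = C * (t - K)^3 + W_max,   K = cbrt(W_max * (1 - beta) / C)

so that right after a reduction the window starts at beta * W_max, flattens out at the previous maximum (the inflection point at t = K), and then probes beyond it.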
void CUBIC::onCongestion(void) {
if (getTimeNow() - _time_point_reduction >= getCurrentDelay()) {
const auto current_cwnd = getCWnD();
_time_point_reduction = getTimeNow();
_window_max = current_cwnd;
// 8 is probably too much (800ms for 100ms rtt)
if (_time_since_reduction >= getCurrentDelay()*4.f) {
const auto tmp_old_tp = _time_since_reduction;
std::cout << "CONGESTION! cwnd:" << current_cwnd << "\n";
const auto current_cwnd = getCWnD(); // TODO: remove, only used by logging?
const auto current_wnd = getWindow(); // respects cwnd and fwnd
resetReductionTimer();
if (current_cwnd < _window_max) {
// congestion before reaching the inflection point (prev window_max).
// reduce to wnd*beta to be fair
_window_max = current_wnd * BETA;
} else {
_window_max = current_wnd;
}
_window_max = std::max(_window_max, 2.0*MAXIMUM_SEGMENT_SIZE);
#if 1
std::cout << "----CONGESTION!"
<< " cwnd:" << current_cwnd
<< " wnd:" << current_wnd
<< " cwnd_max:" << _window_max
<< " pts:" << tmp_old_tp
<< " rtt:" << getCurrentDelay()
<< "\n"
;
#endif
}
}
size_t CUBIC::canSend(void) {
const auto fspace_pkgs = FlowOnly::canSend();
float CUBIC::getWindow(void) const {
return std::min<float>(getCWnD(), FlowOnly::getWindow());
}
int64_t CUBIC::canSend(float time_delta) {
const auto fspace_pkgs = FlowOnly::canSend(time_delta);
updateReductionTimer(time_delta);
if (fspace_pkgs == 0u) {
std::cerr << "CUBIC: flow said 0\n";
return 0u;
}
const int64_t cspace_bytes = getCWnD() - _in_flight_bytes;
const auto window = getCWnD();
int64_t cspace_bytes = window - _in_flight_bytes;
if (cspace_bytes < MAXIMUM_SEGMENT_DATA_SIZE) {
//std::cerr << "CUBIC: cspace < seg size\n";
return 0u;
}
// also limit to max sendrate per tick, which is usually smaller than window
// this is mostly to prevent spikes on empty windows
const auto rate = window / getCurrentDelay();
// we don't want this limit to fall below 1 segment
const int64_t max_bytes_per_tick = std::max<int64_t>(rate * time_delta + 0.5f, MAXIMUM_SEGMENT_SIZE);
cspace_bytes = std::min<int64_t>(cspace_bytes, max_bytes_per_tick);
// limit to whole packets
size_t cspace_pkgs = std::floor(cspace_bytes / MAXIMUM_SEGMENT_DATA_SIZE) * MAXIMUM_SEGMENT_DATA_SIZE;
int64_t cspace_pkgs = (cspace_bytes / MAXIMUM_SEGMENT_DATA_SIZE) * MAXIMUM_SEGMENT_DATA_SIZE;
return std::min(cspace_pkgs, fspace_pkgs);
}


@@ -2,13 +2,9 @@
#include "./flow_only.hpp"
#include <chrono>
struct CUBIC : public FlowOnly {
//using clock = std::chrono::steady_clock;
public: // config
static constexpr float BETA {0.7f};
static constexpr float BETA {0.8f};
static constexpr float SCALING_CONSTANT {0.4f};
static constexpr float RTT_EMA_ALPHA = 0.1f; // 0.1 is very smooth, might need more
@@ -16,37 +12,26 @@ struct CUBIC : public FlowOnly {
// window size before last reduction
double _window_max {2.f * MAXIMUM_SEGMENT_SIZE}; // start with mss*2
//double _window_last_max {2.f * MAXIMUM_SEGMENT_SIZE};
double _time_point_reduction {getTimeNow()};
double _time_since_reduction {12.f}; // warm start
private:
void updateReductionTimer(float time_delta);
void resetReductionTimer(void);
float getCWnD(void) const;
// moving avg over the last few delay samples
// VERY sensitive to bundling acks
//float getCurrentDelay(void) const;
//void addRTT(float new_delay);
void onCongestion(void) override;
public: // api
CUBIC(size_t maximum_segment_data_size) : FlowOnly(maximum_segment_data_size) {}
virtual ~CUBIC(void) {}
float getWindow(void) const override;
// TODO: api for how much data we should send
// take time since last sent into account
// respect max_byterate_allowed
size_t canSend(void) override;
// get the list of timed out seq_ids
//std::vector<SeqIDType> getTimeouts(void) const override;
public: // callbacks
// data size is without overhead
//void onSent(SeqIDType seq, size_t data_size) override;
//void onAck(std::vector<SeqIDType> seqs) override;
// if discard, not resent, not inflight
//void onLoss(SeqIDType seq, bool discard) override;
int64_t canSend(float time_delta) override;
};


@@ -6,10 +6,16 @@
#include <algorithm>
float FlowOnly::getCurrentDelay(void) const {
return std::min(_rtt_ema, RTT_MAX);
// below 1ms is useless
return std::clamp(_rtt_ema, 0.001f, RTT_MAX);
}
void FlowOnly::addRTT(float new_delay) {
if (new_delay > _rtt_ema * RTT_UP_MAX) {
// too large a jump up to be taken into account
return;
}
// lerp(new_delay, rtt_ema, RTT_EMA_ALPHA)
_rtt_ema = RTT_EMA_ALPHA * new_delay + (1.f - RTT_EMA_ALPHA) * _rtt_ema;
}
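addRTT keeps an exponential moving average, rtt_ema = RTT_EMA_ALPHA * new_delay + (1 - RTT_EMA_ALPHA) * rtt_ema, while the RTT_UP_MAX guard drops any sample more than RTT_UP_MAX times the current estimate; a single delayed or bundled ack therefore cannot blow the estimate up, but fast samples still pull it down.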
@@ -23,24 +29,59 @@ void FlowOnly::updateWindow(void) {
_fwnd = std::max(_fwnd, 2.f * MAXIMUM_SEGMENT_DATA_SIZE);
}
size_t FlowOnly::canSend(void) {
void FlowOnly::updateCongestion(void) {
updateWindow();
const auto tmp_window = getWindow();
// packet window * 0.3
// but at least 4
int32_t max_consecutive_events = std::clamp<int32_t>(
(tmp_window/MAXIMUM_SEGMENT_DATA_SIZE) * 0.3f,
4,
50 // limit TODO: fix idle/time starved algo
);
// TODO: magic number
#if 0
std::cout << "NGC_FT1 Flow: pkg out of order"
<< " w:" << tmp_window
<< " pw:" << tmp_window/MAXIMUM_SEGMENT_DATA_SIZE
<< " coe:" << _consecutive_events
<< " mcoe:" << max_consecutive_events
<< "\n";
#endif
if (_consecutive_events > max_consecutive_events) {
//std::cout << "CONGESTION! NGC_FT1 flow: pkg out of order\n";
onCongestion();
// TODO: set _consecutive_events to zero?
}
}
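Worked example (editor's arithmetic): with a ~47 KiB window and ~1.2 KiB of segment data size the packet window is about 40, so up to 40 * 0.3 = 12 consecutive out-of-order events are tolerated before onCongestion() fires; small windows still get the floor of 4 and very large ones are capped at 50.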
float FlowOnly::getWindow(void) const {
return _fwnd;
}
int64_t FlowOnly::canSend(float time_delta) {
if (_in_flight.empty()) {
assert(_in_flight_bytes == 0);
return MAXIMUM_SEGMENT_DATA_SIZE;
// TODO: should we really exit early here??
return 2*MAXIMUM_SEGMENT_DATA_SIZE;
}
updateWindow();
const int64_t fspace = _fwnd - _in_flight_bytes;
int64_t fspace = _fwnd - _in_flight_bytes;
if (fspace < MAXIMUM_SEGMENT_DATA_SIZE) {
return 0u;
}
// limit to whole packets
size_t space = std::floor(fspace / MAXIMUM_SEGMENT_DATA_SIZE)
* MAXIMUM_SEGMENT_DATA_SIZE;
// also limit to max sendrate per tick, which is usually smaller than window
// this is mostly to prevent spikes on empty windows
fspace = std::min<int64_t>(fspace, max_byterate_allowed * time_delta + 0.5f);
return space;
// limit to whole packets
return (fspace / MAXIMUM_SEGMENT_DATA_SIZE) * MAXIMUM_SEGMENT_DATA_SIZE;
}
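Worked example (editor's arithmetic): at the default max_byterate_allowed of 10 MiB/s and a 20 ms tick, the per-tick cap is 10*1024*1024 * 0.02 + 0.5, about 209715 bytes; whatever remains of the flow window after in-flight data is then rounded down to a whole number of MAXIMUM_SEGMENT_DATA_SIZE packets.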
std::vector<FlowOnly::SeqIDType> FlowOnly::getTimeouts(void) const {
@@ -49,7 +90,7 @@ std::vector<FlowOnly::SeqIDType> FlowOnly::getTimeouts(void) const {
// after 3 rtt delay, we trigger timeout
const auto now_adjusted = getTimeNow() - getCurrentDelay()*3.f;
for (const auto& [seq, time_stamp, size] : _in_flight) {
for (const auto& [seq, time_stamp, size, _] : _in_flight) {
if (now_adjusted > time_stamp) {
list.push_back(seq);
}
@@ -58,16 +99,35 @@
return list;
}
int64_t FlowOnly::inFlightCount(void) const {
return _in_flight.size();
}
int64_t FlowOnly::inFlightBytes(void) const {
return _in_flight_bytes;
}
void FlowOnly::onSent(SeqIDType seq, size_t data_size) {
if constexpr (true) {
size_t sum {0u};
for (const auto& it : _in_flight) {
assert(std::get<0>(it) != seq);
assert(it.id != seq);
sum += it.bytes;
}
assert(_in_flight_bytes == sum);
}
_in_flight.push_back({seq, getTimeNow(), data_size + SEGMENT_OVERHEAD});
_in_flight_bytes += data_size + SEGMENT_OVERHEAD;
//_recently_sent_bytes += data_size + SEGMENT_OVERHEAD;
const auto& new_entry = _in_flight.emplace_back(
FlyingBunch{
seq,
static_cast<float>(getTimeNow()),
data_size + SEGMENT_OVERHEAD,
false
}
);
_in_flight_bytes += new_entry.bytes;
_time_point_last_update = getTimeNow();
}
void FlowOnly::onAck(std::vector<SeqIDType> seqs) {
@@ -78,28 +138,31 @@ void FlowOnly::onAck(std::vector<SeqIDType> seqs) {
const auto now {getTimeNow()};
_time_point_last_update = now;
// first seq in seqs is the actual value, all extra are for redundancy
{ // skip in ack is congestion event
// 1. look at primary ack of packet
auto it = std::find_if(_in_flight.begin(), _in_flight.end(), [seq = seqs.front()](const auto& v) -> bool {
return std::get<0>(v) == seq;
return v.id == seq;
});
if (it != _in_flight.end()) {
if (it != _in_flight.begin()) {
if (it != _in_flight.end() && !it->ignore) {
// find first non ignore, it should be the expected
auto first_it = std::find_if_not(_in_flight.cbegin(), _in_flight.cend(), [](const auto& v) -> bool { return v.ignore; });
if (first_it != _in_flight.cend() && it != first_it) {
// not next expected seq -> skip detected
std::cout << "CONGESTION out of order\n";
onCongestion();
//if (getTimeNow() >= _last_congestion_event + _last_congestion_rtt) {
//_recently_lost_data = true;
//_last_congestion_event = getTimeNow();
//_last_congestion_rtt = getCurrentDelay();
//}
_consecutive_events++;
it->ignore = true; // only handle once
updateCongestion();
} else {
// only measure delay if not a congestion
addRTT(now - std::get<1>(*it));
addRTT(now - it->timestamp);
_consecutive_events = 0;
}
} else {
} else { // TODO: if !ignore too
// !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
#if 0
// assume we got a duplicated packet
@@ -111,14 +174,14 @@ void FlowOnly::onAck(std::vector<SeqIDType> seqs) {
for (const auto& seq : seqs) {
auto it = std::find_if(_in_flight.begin(), _in_flight.end(), [seq](const auto& v) -> bool {
return std::get<0>(v) == seq;
return v.id == seq;
});
if (it == _in_flight.end()) {
continue; // not found, ignore
} else {
//most_recent = std::max(most_recent, std::get<1>(*it));
_in_flight_bytes -= std::get<2>(*it);
_in_flight_bytes -= it->bytes;
assert(_in_flight_bytes >= 0);
//_recently_acked_data += std::get<2>(*it);
_in_flight.erase(it);
@@ -128,8 +191,8 @@ void FlowOnly::onAck(std::vector<SeqIDType> seqs) {
void FlowOnly::onLoss(SeqIDType seq, bool discard) {
auto it = std::find_if(_in_flight.begin(), _in_flight.end(), [seq](const auto& v) -> bool {
assert(!std::isnan(std::get<1>(v)));
return std::get<0>(v) == seq;
assert(!std::isnan(v.timestamp));
return v.id == seq;
});
if (it == _in_flight.end()) {
@@ -137,24 +200,27 @@ void FlowOnly::onLoss(SeqIDType seq, bool discard) {
return; // not found, ignore ??
}
std::cerr << "FLOW loss\n";
//std::cerr << "FLOW loss\n";
// "if data lost is not to be retransmitted"
if (discard) {
_in_flight_bytes -= std::get<2>(*it);
_in_flight_bytes -= it->bytes;
assert(_in_flight_bytes >= 0);
_in_flight.erase(it);
} else {
// and do not count it towards rtt
it->timestamp = getTimeNow();
it->ignore = true;
}
// TODO: reset timestamp?
#if 0 // temporarily disable ce for timeout
// at most once per rtt?
// TODO: use delay at event instead
if (getTimeNow() >= _last_congestion_event + _last_congestion_rtt) {
_recently_lost_data = true;
_last_congestion_event = getTimeNow();
_last_congestion_rtt = getCurrentDelay();
// usually after data arrived out-of-order/duplicate
if (!it->ignore) {
it->ignore = true; // only handle once
//_consecutive_events++;
//updateCongestion();
// this is usually a safe indicator for congestion/maxed connection
onCongestion();
}
#endif
}
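// --- editorial sketch: the skip-as-congestion check above, condensed ---
// a minimal, self-contained model of onAck()'s out-of-order detection;
// not repo code, names are illustrative
#include <algorithm>
#include <vector>

struct Flying { int id; bool ignore; };

// returns true if 'acked_id' signals congestion (a newer packet was acked
// while an older, not yet handled packet is still in flight)
bool ackSignalsCongestion(std::vector<Flying>& in_flight, int acked_id) {
	auto it = std::find_if(in_flight.begin(), in_flight.end(),
		[acked_id](const Flying& f) { return f.id == acked_id; });
	if (it == in_flight.end() || it->ignore) {
		return false; // unknown, or already counted once
	}
	// the first non-ignored entry is the ack we expected next
	auto first = std::find_if_not(in_flight.begin(), in_flight.end(),
		[](const Flying& f) { return f.ignore; });
	if (first != in_flight.end() && it != first) {
		it->ignore = true; // count each skip at most once
		return true;
	}
	return false;
}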


@@ -4,23 +4,15 @@
#include <chrono>
#include <vector>
#include <tuple>
struct FlowOnly : public CCAI {
protected:
using clock = std::chrono::steady_clock;
public: // config
static constexpr float RTT_EMA_ALPHA = 0.1f; // might need over time
static constexpr float RTT_MAX = 2.f; // 2 sec is probably too much
//float max_byterate_allowed {100.f*1024*1024}; // 100MiB/s
float max_byterate_allowed {10.f*1024*1024}; // 10MiB/s
//float max_byterate_allowed {1.f*1024*1024}; // 1MiB/s
//float max_byterate_allowed {0.6f*1024*1024}; // 600KiB/s
//float max_byterate_allowed {0.5f*1024*1024}; // 500KiB/s
//float max_byterate_allowed {0.05f*1024*1024}; // 50KiB/s
//float max_byterate_allowed {0.15f*1024*1024}; // 150KiB/s
static constexpr float RTT_EMA_ALPHA = 0.001f; // might need change over time
static constexpr float RTT_UP_MAX = 3.0f; // how much larger a delay can be to be taken into account
static constexpr float RTT_MAX = 2.f; // maybe larger for tunneled connections
protected:
// initialize to low value, will get corrected very fast
@@ -30,11 +22,24 @@ struct FlowOnly : public CCAI {
float _rtt_ema {0.1f};
// list of sequence ids and timestamps of when they were sent (and payload size)
std::vector<std::tuple<SeqIDType, float, size_t>> _in_flight;
struct FlyingBunch {
SeqIDType id;
float timestamp;
size_t bytes;
// set to true if counted as ce or resent due to timeout
bool ignore {false};
};
std::vector<FlyingBunch> _in_flight;
int64_t _in_flight_bytes {0};
int32_t _consecutive_events {0};
clock::time_point _time_start_offset;
// used to clamp growth rate in the void
double _time_point_last_update {getTimeNow()};
protected:
// make values relative to algo start for readability (and precision)
// get timestamp in seconds
@@ -44,7 +49,10 @@ struct FlowOnly : public CCAI {
// moving avg over the last few delay samples
// VERY sensitive to bundling acks
float getCurrentDelay(void) const;
float getCurrentDelay(void) const override;
// call updateWindow() to update this value
float getWindow(void) const override;
void addRTT(float new_delay);
@@ -52,17 +60,24 @@ struct FlowOnly : public CCAI {
virtual void onCongestion(void) {};
// internal logic, calls the onCongestion() event
void updateCongestion(void);
public: // api
FlowOnly(size_t maximum_segment_data_size) : CCAI(maximum_segment_data_size) {}
virtual ~FlowOnly(void) {}
// TODO: api for how much data we should send
// take time since last sent into account
// respect max_byterate_allowed
size_t canSend(void) override;
int64_t canSend(float time_delta) override;
// get the list of timed out seq_ids
std::vector<SeqIDType> getTimeouts(void) const override;
int64_t inFlightCount(void) const override;
int64_t inFlightBytes(void) const override;
public: // callbacks
// data size is without overhead
void onSent(SeqIDType seq, size_t data_size) override;
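// --- editorial sketch: the EMA update implied by the constants above ---
// addRTT()'s body is not part of this diff; this is an illustration of a
// standard exponential moving average with an upward clamp, not repo code
#include <algorithm>

void addRTT_sketch(float& rtt_ema, float new_delay) {
	// samples far above the estimate are usually bundled acks, not path delay
	new_delay = std::min(new_delay, rtt_ema * 3.0f); // RTT_UP_MAX
	// RTT_EMA_ALPHA = 0.001 -> very slow-moving estimate
	rtt_ema = 0.001f * new_delay + (1.f - 0.001f) * rtt_ema;
}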


@@ -6,6 +6,7 @@
#include <deque>
#include <cstdint>
#include <cassert>
#include <tuple>
#include <iomanip>
#include <iostream>
@@ -13,13 +14,22 @@
// https://youtu.be/0HRwNSA-JYM
static bool isSkipSeqID(const std::pair<uint8_t, uint16_t>& a, const std::pair<uint8_t, uint16_t>& b) {
// this is not perfect, would need more ft id based history
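// note: uint16 wraparound is also reported as a skip here (a.second == 65535 promotes to int, so 65536 != 0)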
if (a.first != b.first) {
return false; // we dont know
} else {
return a.second+1 != b.second;
}
}
inline constexpr bool PLOTTING = false;
LEDBAT::LEDBAT(size_t maximum_segment_data_size) : CCAI(maximum_segment_data_size) {
_time_start_offset = clock::now();
}
size_t LEDBAT::canSend(void) {
int64_t LEDBAT::canSend(float time_delta) {
if (_in_flight.empty()) {
return MAXIMUM_SEGMENT_DATA_SIZE;
}
@@ -34,9 +44,7 @@ size_t LEDBAT::canSend(void) {
return 0u;
}
size_t space = std::ceil(std::min<float>(cspace, fspace) / MAXIMUM_SEGMENT_DATA_SIZE) * MAXIMUM_SEGMENT_DATA_SIZE;
return space;
return std::ceil(std::min<float>(cspace, fspace) / MAXIMUM_SEGMENT_DATA_SIZE) * MAXIMUM_SEGMENT_DATA_SIZE;
}
std::vector<LEDBAT::SeqIDType> LEDBAT::getTimeouts(void) const {


@@ -11,7 +11,7 @@
// LEDBAT++: https://www.ietf.org/archive/id/draft-irtf-iccrg-ledbat-plus-plus-01.txt
// LEDBAT++ implementation
struct LEDBAT : public CCAI{
struct LEDBAT : public CCAI {
public: // config
#if 0
using SeqIDType = std::pair<uint8_t, uint16_t>; // tf_id, seq_id
@@ -47,21 +47,20 @@ struct LEDBAT : public CCAI{
//static constexpr size_t rtt_buffer_size_max {2000};
float max_byterate_allowed {10*1024*1024}; // 10MiB/s
public:
LEDBAT(size_t maximum_segment_data_size);
virtual ~LEDBAT(void) {}
// return the current believed window in bytes of how much data can be inflight,
// without overstepping the delay requirement
float getCWnD(void) const {
float getWindow(void) const override {
return _cwnd;
}
// TODO: api for how much data we should send
// take time since last sent into account
// respect max_byterate_allowed
size_t canSend(void) override;
int64_t canSend(float time_delta) override;
// get the list of timed out seq_ids
std::vector<SeqIDType> getTimeouts(void) const override;
@@ -86,7 +85,7 @@ struct LEDBAT : public CCAI{
// moving avg over the last few delay samples
// VERY sensitive to bundling acks
float getCurrentDelay(void) const;
float getCurrentDelay(void) const override;
void addRTT(float new_delay);


@@ -1,149 +1,21 @@
#include "./ngcft1.hpp"
#include <solanaceae/toxcore/utils.hpp>
#include "./flow_only.hpp"
#include "./cubic.hpp"
#include "./ledbat.hpp"
#include <solanaceae/util/utils.hpp>
#include <sodium.h>
#include <cstdint>
#include <iostream>
#include <set>
#include <algorithm>
#include <cassert>
#include <vector>
bool NGCFT1::sendPKG_FT1_REQUEST(
uint32_t group_number, uint32_t peer_number,
uint32_t file_kind,
const uint8_t* file_id, size_t file_id_size
) {
// - 1 byte packet id
// - 4 byte file_kind
// - X bytes file_id
std::vector<uint8_t> pkg;
pkg.push_back(static_cast<uint8_t>(NGCEXT_Event::FT1_REQUEST));
for (size_t i = 0; i < sizeof(file_kind); i++) {
pkg.push_back((file_kind>>(i*8)) & 0xff);
}
for (size_t i = 0; i < file_id_size; i++) {
pkg.push_back(file_id[i]);
}
// lossless
return _t.toxGroupSendCustomPrivatePacket(group_number, peer_number, true, pkg) == TOX_ERR_GROUP_SEND_CUSTOM_PRIVATE_PACKET_OK;
}
bool NGCFT1::sendPKG_FT1_INIT(
uint32_t group_number, uint32_t peer_number,
uint32_t file_kind,
uint64_t file_size,
uint8_t transfer_id,
const uint8_t* file_id, size_t file_id_size
) {
// - 1 byte packet id
// - 4 byte (file_kind)
// - 8 bytes (data size)
// - 1 byte (temporary_file_tf_id, for this peer only, technically just a prefix to distinguish between simultaneous fts)
// - X bytes (file_kind dependent id, different sizes)
std::vector<uint8_t> pkg;
pkg.push_back(static_cast<uint8_t>(NGCEXT_Event::FT1_INIT));
for (size_t i = 0; i < sizeof(file_kind); i++) {
pkg.push_back((file_kind>>(i*8)) & 0xff);
}
for (size_t i = 0; i < sizeof(file_size); i++) {
pkg.push_back((file_size>>(i*8)) & 0xff);
}
pkg.push_back(transfer_id);
for (size_t i = 0; i < file_id_size; i++) {
pkg.push_back(file_id[i]);
}
// lossless
return _t.toxGroupSendCustomPrivatePacket(group_number, peer_number, true, pkg) == TOX_ERR_GROUP_SEND_CUSTOM_PRIVATE_PACKET_OK;
}
bool NGCFT1::sendPKG_FT1_INIT_ACK(
uint32_t group_number, uint32_t peer_number,
uint8_t transfer_id
) {
// - 1 byte packet id
// - 1 byte transfer_id
std::vector<uint8_t> pkg;
pkg.push_back(static_cast<uint8_t>(NGCEXT_Event::FT1_INIT_ACK));
pkg.push_back(transfer_id);
// lossless
return _t.toxGroupSendCustomPrivatePacket(group_number, peer_number, true, pkg) == TOX_ERR_GROUP_SEND_CUSTOM_PRIVATE_PACKET_OK;
}
bool NGCFT1::sendPKG_FT1_DATA(
uint32_t group_number, uint32_t peer_number,
uint8_t transfer_id,
uint16_t sequence_id,
const uint8_t* data, size_t data_size
) {
assert(data_size > 0);
// TODO
// check header_size+data_size <= max pkg size
std::vector<uint8_t> pkg;
pkg.push_back(static_cast<uint8_t>(NGCEXT_Event::FT1_DATA));
pkg.push_back(transfer_id);
pkg.push_back(sequence_id & 0xff);
pkg.push_back((sequence_id >> (1*8)) & 0xff);
// TODO: optimize
for (size_t i = 0; i < data_size; i++) {
pkg.push_back(data[i]);
}
// lossy
return _t.toxGroupSendCustomPrivatePacket(group_number, peer_number, false, pkg) == TOX_ERR_GROUP_SEND_CUSTOM_PRIVATE_PACKET_OK;
}
bool NGCFT1::sendPKG_FT1_DATA_ACK(
uint32_t group_number, uint32_t peer_number,
uint8_t transfer_id,
const uint16_t* seq_ids, size_t seq_ids_size
) {
std::vector<uint8_t> pkg;
pkg.push_back(static_cast<uint8_t>(NGCEXT_Event::FT1_DATA_ACK));
pkg.push_back(transfer_id);
// TODO: optimize
for (size_t i = 0; i < seq_ids_size; i++) {
pkg.push_back(seq_ids[i] & 0xff);
pkg.push_back((seq_ids[i] >> (1*8)) & 0xff);
}
// lossy
return _t.toxGroupSendCustomPrivatePacket(group_number, peer_number, false, pkg) == TOX_ERR_GROUP_SEND_CUSTOM_PRIVATE_PACKET_OK;
}
bool NGCFT1::sendPKG_FT1_MESSAGE(
uint32_t group_number,
uint32_t message_id,
uint32_t file_kind,
const uint8_t* file_id, size_t file_id_size
) {
std::vector<uint8_t> pkg;
pkg.push_back(static_cast<uint8_t>(NGCEXT_Event::FT1_MESSAGE));
for (size_t i = 0; i < sizeof(message_id); i++) {
pkg.push_back((message_id>>(i*8)) & 0xff);
}
for (size_t i = 0; i < sizeof(file_kind); i++) {
pkg.push_back((file_kind>>(i*8)) & 0xff);
}
for (size_t i = 0; i < file_id_size; i++) {
pkg.push_back(file_id[i]);
}
// lossless
return _t.toxGroupSendCustomPacket(group_number, true, pkg) == TOX_ERR_GROUP_SEND_CUSTOM_PACKET_OK;
}
void NGCFT1::updateSendTransfer(float time_delta, uint32_t group_number, uint32_t peer_number, Group::Peer& peer, size_t idx, std::set<CCAI::SeqIDType>& timeouts_set) {
void NGCFT1::updateSendTransfer(float time_delta, uint32_t group_number, uint32_t peer_number, Group::Peer& peer, size_t idx, std::set<CCAI::SeqIDType>& timeouts_set, int64_t& can_packet_size) {
auto& tf_opt = peer.send_transfers.at(idx);
assert(tf_opt.has_value());
auto& tf = tf_opt.value();
@@ -168,29 +40,52 @@ void NGCFT1::updateSendTransfer(float time_delta, uint32_t group_number, uint32_
} else {
// timed out, resend
std::cerr << "NGCFT1 warning: ft init timed out, resending\n";
sendPKG_FT1_INIT(group_number, peer_number, tf.file_kind, tf.file_size, idx, tf.file_id.data(), tf.file_id.size());
_neep.send_ft1_init(group_number, peer_number, tf.file_kind, tf.file_size, idx, tf.file_id.data(), tf.file_id.size());
tf.inits_sent++;
tf.time_since_activity = 0.f;
}
}
//break;
return;
case State::SENDING: {
tf.ssb.for_each(time_delta, [&](uint16_t id, const std::vector<uint8_t>& data, float& time_since_activity) {
// no ack after 5 sec -> resend
//if (time_since_activity >= ngc_ft1_ctx->options.sending_resend_without_ack_after) {
if (timeouts_set.count({idx, id})) {
// TODO: can fail
sendPKG_FT1_DATA(group_number, peer_number, idx, id, data.data(), data.size());
break;
case State::FINISHING: // we still have unacked packets
tf.ssb.for_each(time_delta, [&](uint16_t id, const std::vector<uint8_t>& data, float& time_since_activity) {
if (timeouts_set.count({idx, id})) {
if (can_packet_size >= data.size()) {
_neep.send_ft1_data(group_number, peer_number, idx, id, data.data(), data.size());
peer.cca->onLoss({idx, id}, false);
time_since_activity = 0.f;
timeouts_set.erase({idx, id});
can_packet_size -= data.size();
} else {
std::cerr << "NGCFT1 warning: no space to resend timedout\n";
}
}
});
if (tf.time_since_activity >= sending_give_up_after) {
// no acks for too long, close ft
std::cerr << "NGCFT1 warning: sending ft finishing timed out, deleting\n";
dispatch(
NGCFT1_Event::send_done,
Events::NGCFT1_send_done{
group_number, peer_number,
static_cast<uint8_t>(idx),
}
);
// clean up cca
tf.ssb.for_each(time_delta, [&](uint16_t id, const std::vector<uint8_t>& data, float& time_since_activity) {
peer.cca->onLoss({idx, id}, true);
timeouts_set.erase({idx, id});
});
if (tf.time_since_activity >= sending_give_up_after) {
tf_opt.reset();
}
break;
case State::SENDING: {
// first handle overall timeout (could otherwise do resends directly before, which is useless)
// timeout increases with active transfers (otherwise we could starve them)
if (tf.time_since_activity >= (sending_give_up_after * peer.active_send_transfers)) {
// no acks for too long, close ft
std::cerr << "NGCFT1 warning: sending ft in progress timed out, deleting\n";
std::cerr << "NGCFT1 warning: sending ft in progress timed out, deleting (ifc:" << peer.cca->inFlightCount() << ")\n";
dispatch(
NGCFT1_Event::send_done,
Events::NGCFT1_send_done{
@@ -210,25 +105,26 @@ void NGCFT1::updateSendTransfer(float time_delta, uint32_t group_number, uint32_
return;
}
// do resends
tf.ssb.for_each(time_delta, [&](uint16_t id, const std::vector<uint8_t>& data, float& time_since_activity) {
if (can_packet_size >= data.size() && time_since_activity >= peer.cca->getCurrentDelay() && timeouts_set.count({idx, id})) {
// TODO: can fail
_neep.send_ft1_data(group_number, peer_number, idx, id, data.data(), data.size());
peer.cca->onLoss({idx, id}, false);
time_since_activity = 0.f;
timeouts_set.erase({idx, id});
can_packet_size -= data.size();
}
});
// if chunks in flight < window size (2)
//while (tf.ssb.size() < ngc_ft1_ctx->options.packet_window_size) {
int64_t can_packet_size {static_cast<int64_t>(peer.cca->canSend())};
//if (can_packet_size) {
//std::cerr << "FT: can_packet_size: " << can_packet_size;
//}
size_t count {0};
while (can_packet_size > 0 && tf.file_size > 0) {
std::vector<uint8_t> new_data;
// TODO: parameterize packet size? -> only if JF increases lossy packet size >:)
//size_t chunk_size = std::min<size_t>(496u, tf.file_size - tf.file_size_current);
//size_t chunk_size = std::min<size_t>(can_packet_size, tf.file_size - tf.file_size_current);
size_t chunk_size = std::min<size_t>({
//496u,
//996u,
peer.cca->MAXIMUM_SEGMENT_DATA_SIZE,
static_cast<size_t>(can_packet_size),
tf.file_size - tf.file_size_current
static_cast<size_t>(tf.file_size - tf.file_size_current),
});
if (chunk_size == 0) {
tf.state = State::FINISHING;
@@ -237,14 +133,6 @@ void NGCFT1::updateSendTransfer(float time_delta, uint32_t group_number, uint32_
new_data.resize(chunk_size);
//ngc_ft1_ctx->cb_send_data[tf.file_kind](
//tox,
//group_number, peer_number,
//idx,
//tf.file_size_current,
//new_data.data(), new_data.size(),
//ngc_ft1_ctx->ud_send_data.count(tf.file_kind) ? ngc_ft1_ctx->ud_send_data.at(tf.file_kind) : nullptr
//);
assert(idx <= 0xffu);
// TODO: check return value
dispatch(
@@ -253,112 +141,187 @@ void NGCFT1::updateSendTransfer(float time_delta, uint32_t group_number, uint32_
group_number, peer_number,
static_cast<uint8_t>(idx),
tf.file_size_current,
new_data.data(), new_data.size(),
new_data.data(), static_cast<uint32_t>(new_data.size()),
}
);
uint16_t seq_id = tf.ssb.add(std::move(new_data));
sendPKG_FT1_DATA(group_number, peer_number, idx, seq_id, tf.ssb.entries.at(seq_id).data.data(), tf.ssb.entries.at(seq_id).data.size());
peer.cca->onSent({idx, seq_id}, chunk_size);
#if defined(EXTRA_LOGGING) && EXTRA_LOGGING == 1
fprintf(stderr, "FT: sent data size: %ld (seq %d)\n", chunk_size, seq_id);
#endif
const bool sent = _neep.send_ft1_data(group_number, peer_number, idx, seq_id, tf.ssb.entries.at(seq_id).data.data(), tf.ssb.entries.at(seq_id).data.size());
if (sent) {
peer.cca->onSent({idx, seq_id}, chunk_size);
} else {
std::cerr << "NGCFT1: failed to send packet (queue full?) --------------\n";
peer.cca->onLoss({idx, seq_id}, false); // HACK: fake congestion event
// TODO: onCongestion
can_packet_size = 0;
}
tf.file_size_current += chunk_size;
can_packet_size -= chunk_size;
count++;
}
//if (count) {
//std::cerr << " split over " << count << "\n";
//}
}
break;
case State::FINISHING: // we still have unacked packets
tf.ssb.for_each(time_delta, [&](uint16_t id, const std::vector<uint8_t>& data, float& time_since_activity) {
// no ack after 5 sec -> resend
//if (time_since_activity >= ngc_ft1_ctx->options.sending_resend_without_ack_after) {
if (timeouts_set.count({idx, id})) {
sendPKG_FT1_DATA(group_number, peer_number, idx, id, data.data(), data.size());
peer.cca->onLoss({idx, id}, false);
time_since_activity = 0.f;
timeouts_set.erase({idx, id});
}
});
if (tf.time_since_activity >= sending_give_up_after) {
// no ack after 30sec, close ft
// TODO: notify app
std::cerr << "NGCFT1 warning: sending ft finishing timed out, deleting\n";
// clean up cca
tf.ssb.for_each(time_delta, [&](uint16_t id, const std::vector<uint8_t>& data, float& time_since_activity) {
peer.cca->onLoss({idx, id}, true);
timeouts_set.erase({idx, id});
});
tf_opt.reset();
}
break;
default: // invalid state, delete
std::cerr << "NGCFT1 error: ft in invalid state, deleting\n";
assert(false && "ft in invalid state");
tf_opt.reset();
//continue;
return;
}
}
void NGCFT1::iteratePeer(float time_delta, uint32_t group_number, uint32_t peer_number, Group::Peer& peer) {
auto timeouts = peer.cca->getTimeouts();
std::set<CCAI::SeqIDType> timeouts_set{timeouts.cbegin(), timeouts.cend()};
if (peer.cca) {
auto timeouts = peer.cca->getTimeouts();
std::set<CCAI::SeqIDType> timeouts_set{timeouts.cbegin(), timeouts.cend()};
for (size_t idx = 0; idx < peer.send_transfers.size(); idx++) {
if (peer.send_transfers.at(idx).has_value()) {
updateSendTransfer(time_delta, group_number, peer_number, peer, idx, timeouts_set);
int64_t can_packet_size {peer.cca->canSend(time_delta)}; // might get more space while iterating (time)
// get number of currently running transfers TODO: improve
peer.active_send_transfers = 0;
for (const auto& it : peer.send_transfers) {
if (it.has_value()) {
peer.active_send_transfers++;
}
}
// change iterate start position to not starve transfers in the back
size_t iterated_count = 0;
bool last_send_found = false;
for (size_t idx = peer.next_send_transfer_send_idx; iterated_count < peer.send_transfers.size(); idx++, iterated_count++) {
idx = idx % peer.send_transfers.size();
if (peer.send_transfers.at(idx).has_value()) {
if (!last_send_found && can_packet_size <= 0) {
peer.next_send_transfer_send_idx = idx;
last_send_found = true; // only set once
}
updateSendTransfer(time_delta, group_number, peer_number, peer, idx, timeouts_set, can_packet_size);
}
}
}
// TODO: receiving transfers?
for (size_t idx = 0; idx < peer.recv_transfers.size(); idx++) {
if (!peer.recv_transfers.at(idx).has_value()) {
continue;
}
auto& transfer = peer.recv_transfers.at(idx).value();
// proper switch case?
if (transfer.state == Group::Peer::RecvTransfer::State::FINISHING) {
transfer.finishing_timer -= time_delta;
if (transfer.finishing_timer <= 0.f) {
//dispatch(
// NGCFT1_Event::recv_done,
// Events::NGCFT1_recv_done{
// group_number, peer_number,
// uint8_t(idx)
// }
//);
peer.recv_transfers.at(idx).reset();
}
}
}
}
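// --- editorial sketch: the rotating-start round robin used above ---
// iteration resumes at the first slot that was starved of budget last tick,
// so transfers at the back of the array still get sent; not repo code
#include <array>
#include <cstddef>
#include <cstdint>
#include <optional>

template<typename Slot, size_t N, typename Fn>
void roundRobin(std::array<std::optional<Slot>, N>& slots, size_t& next_start, int64_t& budget, Fn&& fn) {
	bool resume_set = false;
	for (size_t i = 0, idx = next_start; i < N; i++, idx++) {
		idx %= N; // wrap around the fixed slot array
		if (!slots[idx].has_value()) {
			continue;
		}
		if (!resume_set && budget <= 0) {
			next_start = idx; // first starved slot starts the next round
			resume_set = true;
		}
		fn(idx, *slots[idx]); // still called when out of budget (timeout handling)
	}
}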
const CCAI* NGCFT1::getPeerCCA(
uint32_t group_number,
uint32_t peer_number
) const {
auto group_it = groups.find(group_number);
if (group_it == groups.end()) {
return nullptr;
}
auto peer_it = group_it->second.peers.find(peer_number);
if (peer_it == group_it->second.peers.end()) {
return nullptr;
}
const auto& cca_ptr = peer_it->second.cca;
if (!cca_ptr) {
return nullptr;
}
return cca_ptr.get();
}
NGCFT1::NGCFT1(
ToxI& t,
ToxEventProviderI& tep,
NGCEXTEventProviderI& neep
) : _t(t), _tep(tep), _neep(neep)
NGCEXTEventProvider& neep
) : _t(t), _tep(tep), _tep_sr(_tep.newSubRef(this)), _neep(neep), _neep_sr(_neep.newSubRef(this))
{
_neep.subscribe(this, NGCEXT_Event::FT1_REQUEST);
_neep.subscribe(this, NGCEXT_Event::FT1_INIT);
_neep.subscribe(this, NGCEXT_Event::FT1_INIT_ACK);
_neep.subscribe(this, NGCEXT_Event::FT1_DATA);
_neep.subscribe(this, NGCEXT_Event::FT1_DATA_ACK);
_neep.subscribe(this, NGCEXT_Event::FT1_MESSAGE);
_neep_sr
.subscribe(NGCEXT_Event::FT1_REQUEST)
.subscribe(NGCEXT_Event::FT1_INIT)
.subscribe(NGCEXT_Event::FT1_INIT_ACK)
.subscribe(NGCEXT_Event::FT1_DATA)
.subscribe(NGCEXT_Event::FT1_DATA_ACK)
.subscribe(NGCEXT_Event::FT1_MESSAGE)
;
_tep.subscribe(this, Tox_Event::TOX_EVENT_GROUP_PEER_EXIT);
_tep_sr.subscribe(Tox_Event_Type::TOX_EVENT_GROUP_PEER_EXIT);
}
void NGCFT1::iterate(float time_delta) {
float NGCFT1::iterate(float time_delta) {
_time_since_activity += time_delta;
bool transfer_in_progress {false};
for (auto& [group_number, group] : groups) {
for (auto& [peer_number, peer] : group.peers) {
iteratePeer(time_delta, group_number, peer_number, peer);
// find any active transfer
if (!transfer_in_progress) {
for (const auto& t : peer.send_transfers) {
if (t.has_value()) {
transfer_in_progress = true;
break;
}
}
}
if (!transfer_in_progress) {
for (const auto& t : peer.recv_transfers) {
if (t.has_value()) {
transfer_in_progress = true;
break;
}
}
}
}
}
if (transfer_in_progress) {
_time_since_activity = 0.f;
// ~15ms for up to 1mb/s
// ~5ms for up to 4mb/s
return 0.005f; // 5ms
} else if (_time_since_activity < 0.5f) {
// because of temporal locality, more activity is likely soon
return 0.025f;
} else {
return 1.f; // once a sec might be too little
}
}
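// --- editorial sketch: a host loop driving the new adaptive interval ---
// iterate() now returns how soon it wants to be called again; a caller can
// sleep accordingly (clock/loop handling here is assumed, not repo code)
#include <chrono>
#include <thread>

void run_sketch(NGCFT1& ngcft1, const bool& running) {
	auto last = std::chrono::steady_clock::now();
	while (running) {
		const auto now = std::chrono::steady_clock::now();
		const float time_delta = std::chrono::duration<float>(now - last).count();
		last = now;
		const float interval = ngcft1.iterate(time_delta);
		// 5ms while transfers run, up to 1s when idle
		std::this_thread::sleep_for(std::chrono::duration<float>(interval));
	}
}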
void NGCFT1::NGC_FT1_send_request_private(
uint32_t group_number, uint32_t peer_number,
uint32_t file_kind,
const uint8_t* file_id, size_t file_id_size
const uint8_t* file_id, uint32_t file_id_size
) {
// TODO: error check
sendPKG_FT1_REQUEST(group_number, peer_number, file_kind, file_id, file_id_size);
_neep.send_ft1_request(group_number, peer_number, file_kind, file_id, file_id_size);
}
bool NGCFT1::NGC_FT1_send_init_private(
uint32_t group_number, uint32_t peer_number,
uint32_t file_kind,
const uint8_t* file_id, size_t file_id_size,
size_t file_size,
uint8_t* transfer_id
const uint8_t* file_id, uint32_t file_id_size,
uint64_t file_size,
uint8_t* transfer_id,
bool can_compress
) {
if (std::get<0>(_t.toxGroupPeerGetConnectionStatus(group_number, peer_number)).value_or(TOX_CONNECTION_NONE) == TOX_CONNECTION_NONE) {
std::cerr << "NGCFT1 error: cant init ft, peer offline\n";
@@ -388,10 +351,12 @@ bool NGCFT1::NGC_FT1_send_init_private(
std::cerr << "NGCFT1 error: cant init ft, no free transfer slot\n";
return false;
}
idx = i;
}
// TODO: check return value
sendPKG_FT1_INIT(group_number, peer_number, file_kind, file_size, idx, file_id, file_id_size);
_neep.send_ft1_init(group_number, peer_number, file_kind, file_size, idx, file_id, file_id_size);
peer.send_transfers[idx] = Group::Peer::SendTransfer{
file_kind,
@@ -415,18 +380,64 @@ bool NGCFT1::NGC_FT1_send_message_public(
uint32_t group_number,
uint32_t& message_id,
uint32_t file_kind,
const uint8_t* file_id, size_t file_id_size
const uint8_t* file_id, uint32_t file_id_size
) {
// create msg_id
message_id = randombytes_random();
// TODO: check return value
return sendPKG_FT1_MESSAGE(group_number, message_id, file_kind, file_id, file_id_size);
return _neep.send_all_ft1_message(group_number, message_id, file_kind, file_id, file_id_size);
}
float NGCFT1::getPeerDelay(uint32_t group_number, uint32_t peer_number) const {
auto* cca_ptr = getPeerCCA(group_number, peer_number);
if (cca_ptr == nullptr) {
return -1.f;
}
return cca_ptr->getCurrentDelay();
}
float NGCFT1::getPeerWindow(uint32_t group_number, uint32_t peer_number) const {
auto* cca_ptr = getPeerCCA(group_number, peer_number);
if (cca_ptr == nullptr) {
return -1.f;
}
return cca_ptr->getWindow();
}
int64_t NGCFT1::getPeerInFlightPackets(
uint32_t group_number,
uint32_t peer_number
) const {
auto* cca_ptr = getPeerCCA(group_number, peer_number);
if (cca_ptr == nullptr) {
return -1;
}
return cca_ptr->inFlightCount();
}
int64_t NGCFT1::getPeerInFlightBytes(
uint32_t group_number,
uint32_t peer_number
) const {
auto* cca_ptr = getPeerCCA(group_number, peer_number);
if (cca_ptr == nullptr) {
return -1;
}
return cca_ptr->inFlightCount();
}
bool NGCFT1::onEvent(const Events::NGCEXT_ft1_request& e) {
//#if !NDEBUG
std::cout << "NGCFT1: FT1_REQUEST fk:" << e.file_kind << " [" << bin2hex(e.file_id) << "]\n";
std::cout << "NGCFT1: got FT1_REQUEST fk:" << e.file_kind << " [" << bin2hex(e.file_id) << "]\n";
//#endif
// .... just rethrow??
@@ -436,23 +447,23 @@ bool NGCFT1::onEvent(const Events::NGCEXT_ft1_request& e) {
Events::NGCFT1_recv_request{
e.group_number, e.peer_number,
static_cast<NGCFT1_file_kind>(e.file_kind),
e.file_id.data(), e.file_id.size()
e.file_id.data(), static_cast<uint32_t>(e.file_id.size())
}
);
}
bool NGCFT1::onEvent(const Events::NGCEXT_ft1_init& e) {
//#if !NDEBUG
std::cout << "NGCFT1: FT1_INIT fk:" << e.file_kind << " fs:" << e.file_size << " tid:" << int(e.transfer_id) << " [" << bin2hex(e.file_id) << "]\n";
std::cout << "NGCFT1: got FT1_INIT fk:" << e.file_kind << " fs:" << e.file_size << " tid:" << int(e.transfer_id) << " [" << bin2hex(e.file_id) << "]\n";
//#endif
#if 0
bool accept = false;
dispatch(
NGCFT1_Event::recv_init,
Events::NGCFT1_recv_init{
e.group_number, e.peer_number,
static_cast<NGCFT1_file_kind>(e.file_kind),
e.file_id.data(), e.file_id.size(),
e.file_id.data(), static_cast<uint32_t>(e.file_id.size()),
e.transfer_id,
e.file_size,
accept
@@ -464,13 +475,13 @@ bool NGCFT1::onEvent(const Events::NGCEXT_ft1_init& e) {
return true; // return true?
}
sendPKG_FT1_INIT_ACK(e.group_number, e.peer_number, e.transfer_id);
_neep.send_ft1_init_ack(e.group_number, e.peer_number, e.transfer_id);
std::cout << "NGCFT1: accepted init\n";
auto& peer = groups[e.group_number].peers[e.peer_number];
if (peer.recv_transfers[e.transfer_id].has_value()) {
std::cerr << "NGCFT1 warning: overwriting existing recv_transfer " << int(e.transfer_id) << "\n";
std::cerr << "NGCFT1 warning: overwriting existing recv_transfer " << int(e.transfer_id) << ", other peer started new transfer on preexising\n";
}
peer.recv_transfers[e.transfer_id] = Group::Peer::RecvTransfer{
@@ -481,13 +492,24 @@ bool NGCFT1::onEvent(const Events::NGCEXT_ft1_init& e) {
0u,
{} // rsb
};
return true;
#else
// HACK: simply forward to init2 handler
return onEvent(Events::NGCEXT_ft1_init2{
e.group_number,
e.peer_number,
e.file_kind,
e.file_size,
e.transfer_id,
0x00, // non set
e.file_id, // sadly a copy, wont matter in the future
});
#endif
}
bool NGCFT1::onEvent(const Events::NGCEXT_ft1_init_ack& e) {
//#if !NDEBUG
std::cout << "NGCFT1: FT1_INIT_ACK\n";
std::cout << "NGCFT1: got FT1_INIT_ACK mds:" << e.max_lossy_data_size << "\n";
//#endif
// we now should start sending data
@@ -507,10 +529,35 @@ bool NGCFT1::onEvent(const Events::NGCEXT_ft1_init_ack& e) {
using State = Group::Peer::SendTransfer::State;
if (transfer.state != State::INIT_SENT) {
std::cerr << "NGCFT1 error: inti_ack but not in INIT_SENT state\n";
std::cerr << "NGCFT1 error: init_ack but not in INIT_SENT state\n";
return true;
}
if (e.max_lossy_data_size < 16) {
std::cerr << "NGCFT1 error: init_ack max_lossy_data_size is less than 16 bytes\n";
return true;
}
// negotiated packet_data_size
const auto negotiated_packet_data_size = std::min<uint32_t>(e.max_lossy_data_size, _t.toxGroupMaxCustomLossyPacketLength()-4);
// TODO: reset cca with new pkg size
if (!peer.cca) {
// make random max of [1020-1220]
const uint32_t random_max_data_size = (1024-4) + _rng()%201;
const uint32_t randomized_negotiated_packet_data_size = std::min(negotiated_packet_data_size, random_max_data_size);
peer.max_packet_data_size = randomized_negotiated_packet_data_size;
std::cerr << "NGCFT1: creating cca with max:" << peer.max_packet_data_size << "\n";
peer.cca = std::make_unique<CUBIC>(peer.max_packet_data_size);
//peer.cca = std::make_unique<LEDBAT>(peer.max_packet_data_size);
//peer.cca = std::make_unique<FlowOnly>(peer.max_packet_data_size);
//peer.cca->max_byterate_allowed = 1.f *1024*1024;
} else {
std::cerr << "NGCFT1: reusing cca. rtt:" << peer.cca->getCurrentDelay() << " w:" << peer.cca->getWindow() << " ifc:" << peer.cca->inFlightCount() << "\n";
}
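// worked example of the negotiation above (all numbers assumed):
//   local limit : toxGroupMaxCustomLossyPacketLength()-4, e.g. 1369
//   peer limit  : e.max_lossy_data_size, e.g. 1200
//   negotiated  : min(1200, 1369) = 1200
//   random cap  : (1024-4) + rng()%201 -> in [1020, 1220]
//   used        : min(negotiated, random cap) -> somewhere in [1020, 1200]
// presumably the randomized cap exercises differing packet sizes across peers
// instead of everyone settling on the same maximum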
// iterate will now call NGC_FT1_send_data_cb
transfer.state = State::SENDING;
transfer.time_since_activity = 0.f;
@@ -520,7 +567,7 @@ bool NGCFT1::onEvent(const Events::NGCEXT_ft1_init_ack& e) {
bool NGCFT1::onEvent(const Events::NGCEXT_ft1_data& e) {
#if !NDEBUG
std::cout << "NGCFT1: FT1_DATA\n";
//std::cout << "NGCFT1: got FT1_DATA " << e.sequence_id << "\n";
#endif
if (e.data.empty()) {
@@ -555,7 +602,7 @@ bool NGCFT1::onEvent(const Events::NGCEXT_ft1_data& e) {
e.group_number, e.peer_number,
e.transfer_id,
transfer.file_size_current,
data.data(), data.size()
data.data(), static_cast<uint32_t>(data.size())
}
);
@@ -568,13 +615,19 @@ bool NGCFT1::onEvent(const Events::NGCEXT_ft1_data& e) {
// TODO: check if this caps at max acks
if (!ack_seq_ids.empty()) {
// TODO: check return value
sendPKG_FT1_DATA_ACK(e.group_number, e.peer_number, e.transfer_id, ack_seq_ids.data(), ack_seq_ids.size());
_neep.send_ft1_data_ack(e.group_number, e.peer_number, e.transfer_id, ack_seq_ids.data(), ack_seq_ids.size());
}
if (transfer.file_size_current == transfer.file_size) {
// TODO: set all data received, and clean up
//transfer.state = Group::Peer::RecvTransfer::State::RECV;
// all data received
transfer.state = Group::Peer::RecvTransfer::State::FINISHING;
// TODO: keep around for remote timeout + delay + offset, so we can be sure all acks were received
// or implement a dedicated finished that needs to be acked
//transfer.finishing_timer = 0.75f; // TODO: we are receiving, we dont know delay
transfer.finishing_timer = FlowOnly::RTT_MAX;
dispatch(
NGCFT1_Event::recv_done,
Events::NGCFT1_recv_done{
@@ -589,7 +642,7 @@ bool NGCFT1::onEvent(const Events::NGCEXT_ft1_data& e) {
bool NGCFT1::onEvent(const Events::NGCEXT_ft1_data_ack& e) {
#if !NDEBUG
//std::cout << "NGCFT1: FT1_DATA_ACK\n";
//std::cout << "NGCFT1: got FT1_DATA_ACK\n";
#endif
if (!groups.count(e.group_number)) {
@@ -599,6 +652,8 @@ bool NGCFT1::onEvent(const Events::NGCEXT_ft1_data_ack& e) {
Group::Peer& peer = groups[e.group_number].peers[e.peer_number];
if (!peer.send_transfers[e.transfer_id].has_value()) {
// we delete directly, packets might still be in flight (in practice they are when ce)
// update: we no longer delete directly, but its kinda hacky
std::cerr << "NGCFT1 warning: data_ack for unknown transfer\n";
return true;
}
@@ -625,7 +680,7 @@ bool NGCFT1::onEvent(const Events::NGCEXT_ft1_data_ack& e) {
// delete if all packets acked
if (transfer.file_size == transfer.file_size_current && transfer.ssb.size() == 0) {
std::cout << "NGCFT1: " << int(e.transfer_id) << " done\n";
std::cout << "NGCFT1: " << int(e.transfer_id) << " done. wnd:" << peer.cca->getWindow() << "\n";
dispatch(
NGCFT1_Event::send_done,
Events::NGCFT1_send_done{
@@ -641,7 +696,7 @@ bool NGCFT1::onEvent(const Events::NGCEXT_ft1_data_ack& e) {
}
bool NGCFT1::onEvent(const Events::NGCEXT_ft1_message& e) {
std::cout << "NGCFT1: FT1_MESSAGE mid:" << e.message_id << " fk:" << e.file_kind << " [" << bin2hex(e.file_id) << "]\n";
std::cout << "NGCFT1: got FT1_MESSAGE mid:" << e.message_id << " fk:" << e.file_kind << " [" << bin2hex(e.file_id) << "]\n";
// .... just rethrow??
// TODO: dont
@@ -651,11 +706,55 @@ bool NGCFT1::onEvent(const Events::NGCEXT_ft1_message& e) {
e.group_number, e.peer_number,
e.message_id,
static_cast<NGCFT1_file_kind>(e.file_kind),
e.file_id.data(), e.file_id.size()
e.file_id.data(), static_cast<uint32_t>(e.file_id.size())
}
);
}
bool NGCFT1::onEvent(const Events::NGCEXT_ft1_init2& e) {
//#if !NDEBUG
std::cout << "NGCFT1: got FT1_INIT2 fk:" << e.file_kind << " fs:" << e.file_size << " tid:" << int(e.transfer_id) << " ff:" << int(e.feature_flags) << " [" << bin2hex(e.file_id) << "]\n";
//#endif
bool accept = false;
dispatch(
NGCFT1_Event::recv_init,
Events::NGCFT1_recv_init{
e.group_number, e.peer_number,
static_cast<NGCFT1_file_kind>(e.file_kind),
e.file_id.data(), static_cast<uint32_t>(e.file_id.size()),
e.transfer_id,
e.file_size,
accept
}
);
if (!accept) {
std::cout << "NGCFT1: rejected init2\n";
return true; // return true?
}
_neep.send_ft1_init_ack(e.group_number, e.peer_number, e.transfer_id);
std::cout << "NGCFT1: accepted init2\n";
auto& peer = groups[e.group_number].peers[e.peer_number];
if (peer.recv_transfers[e.transfer_id].has_value()) {
std::cerr << "NGCFT1 warning: overwriting existing recv_transfer " << int(e.transfer_id) << ", other peer started new transfer on preexising\n";
}
peer.recv_transfers[e.transfer_id] = Group::Peer::RecvTransfer{
e.file_kind,
e.file_id,
Group::Peer::RecvTransfer::State::INITED,
e.file_size,
0u,
{} // rsb
};
return true;
}
bool NGCFT1::onToxEvent(const Tox_Event_Group_Peer_Exit* e) {
const auto group_number = tox_event_group_peer_exit_get_group_number(e);
const auto peer_number = tox_event_group_peer_exit_get_peer_id(e);
@@ -711,7 +810,7 @@ bool NGCFT1::onToxEvent(const Tox_Event_Group_Peer_Exit* e) {
}
// reset cca
peer.cca = std::make_unique<CUBIC>(500-4); // TODO: replace with tox_group_max_custom_lossy_packet_length()-4
peer.cca.reset(); // dont actually reallocate
return false;
}


@@ -2,22 +2,23 @@
// solanaceae port of tox_ngc_ft1
#include <solanaceae/toxcore/tox_interface.hpp>
#include <solanaceae/toxcore/tox_event_interface.hpp>
#include <solanaceae/toxcore/tox_interface.hpp>
#include <solanaceae/ngc_ext/ngcext.hpp>
#include "./cubic.hpp"
//#include "./flow_only.hpp"
//#include "./ledbat.hpp"
#include "./cca.hpp"
#include "./rcv_buf.hpp"
#include "./snd_buf.hpp"
#include "./ngcft1_file_kind.hpp"
#include <cstdint>
#include <map>
#include <set>
#include <memory>
#include <random>
namespace Events {
@@ -28,7 +29,7 @@ namespace Events {
NGCFT1_file_kind file_kind;
const uint8_t* file_id;
size_t file_id_size;
uint32_t file_id_size;
};
struct NGCFT1_recv_init {
@@ -38,10 +39,10 @@ namespace Events {
NGCFT1_file_kind file_kind;
const uint8_t* file_id;
size_t file_id_size;
uint32_t file_id_size;
const uint8_t transfer_id;
const size_t file_size;
const uint64_t file_size;
// return true to accept, false to deny
bool& accept;
@@ -53,9 +54,9 @@ namespace Events {
uint8_t transfer_id;
size_t data_offset;
uint64_t data_offset;
const uint8_t* data;
size_t data_size;
uint32_t data_size;
};
// request to fill data_size bytes into data
@@ -65,9 +66,9 @@ namespace Events {
uint8_t transfer_id;
size_t data_offset;
uint64_t data_offset;
uint8_t* data;
size_t data_size;
uint32_t data_size;
};
struct NGCFT1_recv_done {
@@ -95,7 +96,7 @@ namespace Events {
NGCFT1_file_kind file_kind;
const uint8_t* file_id;
size_t file_id_size;
uint32_t file_id_size;
};
} // Events
@@ -131,17 +132,24 @@ using NGCFT1EventProviderI = EventProviderI<NGCFT1EventI>;
class NGCFT1 : public ToxEventI, public NGCEXTEventI, public NGCFT1EventProviderI {
ToxI& _t;
ToxEventProviderI& _tep;
NGCEXTEventProviderI& _neep;
ToxEventProviderI::SubscriptionReference _tep_sr;
NGCEXTEventProvider& _neep; // not the interface?
NGCEXTEventProvider::SubscriptionReference _neep_sr;
std::default_random_engine _rng{std::random_device{}()};
float _time_since_activity {10.f};
// TODO: config
size_t acks_per_packet {3u}; // 3
float init_retry_timeout_after {5.f}; // 10sec
float sending_give_up_after {30.f}; // 30sec
float init_retry_timeout_after {4.f};
float sending_give_up_after {10.f}; // sec (per active transfer)
struct Group {
struct Peer {
std::unique_ptr<CCAI> cca = std::make_unique<CUBIC>(500-4); // TODO: replace with tox_group_max_custom_lossy_packet_length()-4
uint32_t max_packet_data_size {500-4};
//std::unique_ptr<CCAI> cca = std::make_unique<CUBIC>(max_packet_data_size); // TODO: replace with tox_group_max_custom_lossy_packet_length()-4
std::unique_ptr<CCAI> cca;
struct RecvTransfer {
uint32_t file_kind;
@@ -150,11 +158,14 @@ class NGCFT1 : public ToxEventI, public NGCEXTEventI, public NGCFT1EventProvider
enum class State {
INITED, //init acked, but no data received yet (might be dropped)
RECV, // receiving data
FINISHING, // got all the data, but we wait for 2*delay, since its likely there is data still arriving
} state;
// float time_since_last_activity ?
size_t file_size {0};
size_t file_size_current {0};
uint64_t file_size {0};
uint64_t file_size_current {0};
// if state FINISHING and it reaches 0, delete
float finishing_timer {0.f};
// sequence id based reassembly
RecvSequenceBuffer rsb;
@@ -179,8 +190,8 @@ class NGCFT1 : public ToxEventI, public NGCEXTEventI, public NGCFT1EventProvider
size_t inits_sent {1}; // is sent when creating
float time_since_activity {0.f};
size_t file_size {0};
size_t file_size_current {0};
uint64_t file_size {0};
uint64_t file_size_current {0};
// sequence array
// list of sent but not acked seq_ids
@@ -188,46 +199,44 @@ class NGCFT1 : public ToxEventI, public NGCEXTEventI, public NGCFT1EventProvider
};
std::array<std::optional<SendTransfer>, 256> send_transfers;
size_t next_send_transfer_idx {0}; // next id will be 0
size_t next_send_transfer_send_idx {0};
size_t active_send_transfers {0};
};
std::map<uint32_t, Peer> peers;
};
std::map<uint32_t, Group> groups;
protected:
bool sendPKG_FT1_REQUEST(uint32_t group_number, uint32_t peer_number, uint32_t file_kind, const uint8_t* file_id, size_t file_id_size);
bool sendPKG_FT1_INIT(uint32_t group_number, uint32_t peer_number, uint32_t file_kind, uint64_t file_size, uint8_t transfer_id, const uint8_t* file_id, size_t file_id_size);
bool sendPKG_FT1_INIT_ACK(uint32_t group_number, uint32_t peer_number, uint8_t transfer_id);
bool sendPKG_FT1_DATA(uint32_t group_number, uint32_t peer_number, uint8_t transfer_id, uint16_t sequence_id, const uint8_t* data, size_t data_size);
bool sendPKG_FT1_DATA_ACK(uint32_t group_number, uint32_t peer_number, uint8_t transfer_id, const uint16_t* seq_ids, size_t seq_ids_size);
bool sendPKG_FT1_MESSAGE(uint32_t group_number, uint32_t message_id, uint32_t file_kind, const uint8_t* file_id, size_t file_id_size);
void updateSendTransfer(float time_delta, uint32_t group_number, uint32_t peer_number, Group::Peer& peer, size_t idx, std::set<CCAI::SeqIDType>& timeouts_set);
void updateSendTransfer(float time_delta, uint32_t group_number, uint32_t peer_number, Group::Peer& peer, size_t idx, std::set<CCAI::SeqIDType>& timeouts_set, int64_t& can_packet_size);
void iteratePeer(float time_delta, uint32_t group_number, uint32_t peer_number, Group::Peer& peer);
const CCAI* getPeerCCA(uint32_t group_number, uint32_t peer_number) const;
public:
NGCFT1(
ToxI& t,
ToxEventProviderI& tep,
NGCEXTEventProviderI& neep
NGCEXTEventProvider& neep
);
void iterate(float delta);
float iterate(float delta);
public: // ft1 api
// TODO: public variant?
void NGC_FT1_send_request_private(
uint32_t group_number, uint32_t peer_number,
uint32_t file_kind,
const uint8_t* file_id, size_t file_id_size
const uint8_t* file_id, uint32_t file_id_size
);
// public does not make sense here
bool NGC_FT1_send_init_private(
uint32_t group_number, uint32_t peer_number,
uint32_t file_kind,
const uint8_t* file_id, size_t file_id_size,
size_t file_size,
uint8_t* transfer_id
const uint8_t* file_id, uint32_t file_id_size,
uint64_t file_size,
uint8_t* transfer_id,
bool can_compress = false // set this if you know the data is compressible (eg text)
);
// sends the message and fills in message_id
@@ -235,9 +244,26 @@ class NGCFT1 : public ToxEventI, public NGCEXTEventI, public NGCFT1EventProvider
uint32_t group_number,
uint32_t& message_id,
uint32_t file_kind,
const uint8_t* file_id, size_t file_id_size
const uint8_t* file_id, uint32_t file_id_size
);
public: // cca stuff
// rtt/delay
// negative on error or no cca
float getPeerDelay(uint32_t group_number, uint32_t peer_number) const;
// believed possible current window
// negative on error or no cca
float getPeerWindow(uint32_t group_number, uint32_t peer_number) const;
// packets in flight
// returns -1 if error or no cca
int64_t getPeerInFlightPackets(uint32_t group_number, uint32_t peer_number) const;
// actual bytes in flight (aka window)
// returns -1 if error or no cca
int64_t getPeerInFlightBytes(uint32_t group_number, uint32_t peer_number) const;
protected:
bool onEvent(const Events::NGCEXT_ft1_request&) override;
bool onEvent(const Events::NGCEXT_ft1_init&) override;
@@ -245,6 +271,7 @@ class NGCFT1 : public ToxEventI, public NGCEXTEventI, public NGCFT1EventProvider
bool onEvent(const Events::NGCEXT_ft1_data&) override;
bool onEvent(const Events::NGCEXT_ft1_data_ack&) override;
bool onEvent(const Events::NGCEXT_ft1_message&) override;
bool onEvent(const Events::NGCEXT_ft1_init2&) override;
protected:
bool onToxEvent(const Tox_Event_Group_Peer_Exit* e) override;


@@ -72,5 +72,23 @@ enum class NGCFT1_file_kind : uint32_t {
// id: sha256
// always of size 16KiB, except if last piece in file
TORRENT_V2_PIECE,
// https://gist.github.com/Green-Sky/440cd9817a7114786850eb4c62dc57c3
// id: ts start, ts end
// content:
// - ts start (do we need this? when this is part of the id?)
// - ts end (same)
// - list size
// - ppk
// - mid
// - ts
HS2_INFO_RANGE_TIME = 0x00000f00,
// TODO: half open ranges
// TODO: id based
// TODO: ppk based?
// id: ppk, mid, ts
HS2_SINGLE_MESSAGE,
// TODO: message pack
};


@@ -0,0 +1,248 @@
#include "./sha1_mapped_filesystem.hpp"
#include <solanaceae/object_store/meta_components.hpp>
#include <solanaceae/object_store/meta_components_file.hpp>
#include "../file_constructor.hpp"
#include "../ft1_sha1_info.hpp"
#include "../hash_utils.hpp"
#include "../components.hpp"
#include <solanaceae/util/utils.hpp>
#include <atomic>
#include <mutex>
#include <list>
#include <thread>
#include <iostream>
namespace Backends {
struct SHA1MappedFilesystem_InfoBuilderState {
std::atomic_bool info_builder_dirty {false};
std::mutex info_builder_queue_mutex;
using InfoBuilderEntry = std::function<void(void)>;
std::list<InfoBuilderEntry> info_builder_queue;
};
SHA1MappedFilesystem::SHA1MappedFilesystem(
ObjectStore2& os
) : StorageBackendI::StorageBackendI(os), _ibs(std::make_unique<SHA1MappedFilesystem_InfoBuilderState>()) {
}
SHA1MappedFilesystem::~SHA1MappedFilesystem(void) {
}
void SHA1MappedFilesystem::tick(void) {
if (_ibs->info_builder_dirty) {
std::lock_guard l{_ibs->info_builder_queue_mutex};
_ibs->info_builder_dirty = false; // set while holding lock
for (auto& it : _ibs->info_builder_queue) {
it();
}
_ibs->info_builder_queue.clear();
}
}
ObjectHandle SHA1MappedFilesystem::newObject(ByteSpan id) {
ObjectHandle o{_os.registry(), _os.registry().create()};
o.emplace<ObjComp::Ephemeral::Backend>(this);
o.emplace<ObjComp::ID>(std::vector<uint8_t>{id});
//o.emplace<ObjComp::Ephemeral::FilePath>(object_file_path.generic_u8string());
_os.throwEventConstruct(o);
return o;
}
void SHA1MappedFilesystem::newFromFile(std::string_view file_name, std::string_view file_path, std::function<void(ObjectHandle o)>&& cb) {
std::thread(std::move([
this,
ibs = _ibs.get(),
cb = std::move(cb),
file_name_ = std::string(file_name),
file_path_ = std::string(file_path)
]() mutable {
// 0. open and fail
std::unique_ptr<File2I> file_impl = construct_file2_rw_mapped(file_path_, -1);
if (!file_impl->isGood()) {
{
std::lock_guard l{ibs->info_builder_queue_mutex};
ibs->info_builder_queue.push_back([file_path_](){
// back on iterate thread
std::cerr << "SHA1MF error: failed opening file '" << file_path_ << "'!\n";
});
ibs->info_builder_dirty = true; // still in scope, set before mutex unlock
}
return;
}
// 1. build info by hashing all chunks
FT1InfoSHA1 sha1_info;
// build info
sha1_info.file_name = file_name_;
sha1_info.file_size = file_impl->_file_size; // TODO: remove the reliance on implementation details
sha1_info.chunk_size = chunkSizeFromFileSize(sha1_info.file_size);
{
// TODO: remove
const uint32_t cs_low {32*1024};
const uint32_t cs_high {4*1024*1024};
assert(sha1_info.chunk_size >= cs_low);
assert(sha1_info.chunk_size <= cs_high);
}
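// editorial note: chunkSizeFromFileSize() is not part of this diff; one
// plausible shape satisfying the asserted [32KiB, 4MiB] bounds (illustration
// only, the chunk-count target is an assumption, not the repo's function):
//   uint32_t cs = 32*1024;
//   while (cs < 4*1024*1024 && file_size / cs > 64*1024) { cs *= 2; }
//   return cs;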
{ // build chunks
// HACK: load file fully
// ... its only a hack if its not memory mapped, but reading in chunk_sized chunks is probably a good idea anyway
const auto file_data = file_impl->read(file_impl->_file_size, 0);
size_t i = 0;
for (; i + sha1_info.chunk_size < file_data.size; i += sha1_info.chunk_size) {
sha1_info.chunks.push_back(hash_sha1(file_data.ptr+i, sha1_info.chunk_size));
}
if (i < file_data.size) {
sha1_info.chunks.push_back(hash_sha1(file_data.ptr+i, file_data.size-i));
}
}
file_impl.reset();
std::lock_guard l{ibs->info_builder_queue_mutex};
ibs->info_builder_queue.push_back(std::move([
this,
file_name_,
file_path_,
sha1_info = std::move(sha1_info),
cb = std::move(cb)
]() mutable { //
// executed on iterate thread
// reopen, cant move, since std::function needs to be copy constructible (meh)
std::unique_ptr<File2I> file_impl = construct_file2_rw_mapped(file_path_, sha1_info.file_size);
if (!file_impl->isGood()) {
std::cerr << "SHA1MF error: failed opening file '" << file_path_ << "'!\n";
return;
}
// 2. hash info
std::vector<uint8_t> sha1_info_data;
std::vector<uint8_t> sha1_info_hash;
std::cout << "SHA1MF info is: \n" << sha1_info;
sha1_info_data = sha1_info.toBuffer();
std::cout << "SHA1MF sha1_info size: " << sha1_info_data.size() << "\n";
sha1_info_hash = hash_sha1(sha1_info_data.data(), sha1_info_data.size());
std::cout << "SHA1MF sha1_info_hash: " << bin2hex(sha1_info_hash) << "\n";
ObjectHandle o;
// check if content exists
// TODO: store "info_to_content" in reg/backend, for better lookup speed
// rn ok, bc this is rare
for (const auto& [it_ov, it_ih] : _os.registry().view<Components::FT1InfoSHA1Hash>().each()) {
if (it_ih.hash == sha1_info_hash) {
o = {_os.registry(), it_ov};
}
}
if (static_cast<bool>(o)) {
// TODO: check if content is incomplete and use file instead
if (!o.all_of<Components::FT1InfoSHA1>()) {
o.emplace<Components::FT1InfoSHA1>(sha1_info);
}
if (!o.all_of<Components::FT1InfoSHA1Data>()) {
o.emplace<Components::FT1InfoSHA1Data>(sha1_info_data);
}
// hash has to be set already
// Components::FT1InfoSHA1Hash
// hmmm
// TODO: we need a replacement for this
o.remove<ObjComp::Ephemeral::File::TagTransferPaused>();
// we dont want the info anymore
o.remove<Components::ReRequestInfoTimer>();
} else {
o = newObject(ByteSpan{sha1_info_hash});
o.emplace<Components::FT1InfoSHA1>(sha1_info);
o.emplace<Components::FT1InfoSHA1Data>(sha1_info_data); // keep around? or file?
o.emplace<Components::FT1InfoSHA1Hash>(sha1_info_hash);
}
{ // lookup tables and have
auto& cc = o.get_or_emplace<Components::FT1ChunkSHA1Cache>();
// skip have vec, since all
cc.have_count = sha1_info.chunks.size(); // need?
cc.chunk_hash_to_index.clear(); // for copy paste
for (size_t i = 0; i < sha1_info.chunks.size(); i++) {
cc.chunk_hash_to_index[sha1_info.chunks[i]].push_back(i);
}
}
o.emplace_or_replace<ObjComp::F::TagLocalHaveAll>();
o.remove<ObjComp::F::LocalHaveBitset>();
{ // file info
// TODO: not overwrite fi? since same?
o.emplace_or_replace<ObjComp::F::SingleInfo>(file_name_, file_impl->_file_size);
o.emplace_or_replace<ObjComp::F::SingleInfoLocal>(file_path_);
o.emplace_or_replace<ObjComp::Ephemeral::FilePath>(file_path_); // ?
}
o.emplace_or_replace<Components::FT1File2>(std::move(file_impl));
if (!o.all_of<ObjComp::Ephemeral::File::TransferStats>()) {
o.emplace<ObjComp::Ephemeral::File::TransferStats>();
}
cb(o);
// TODO: earlier?
_os.throwEventUpdate(o);
}));
ibs->info_builder_dirty = true; // still in scope, set before mutex unlock
})).detach();
}
std::unique_ptr<File2I> SHA1MappedFilesystem::file2(Object ov, FILE2_FLAGS flags) {
if (flags & FILE2_RAW) {
std::cerr << "SHA1MF error: does not support raw modes\n";
return nullptr;
}
ObjectHandle o{_os.registry(), ov};
if (!static_cast<bool>(o)) {
return nullptr;
}
// will this do if we go and support enc?
// use ObjComp::Ephemeral::FilePath instead??
if (!o.all_of<ObjComp::F::SingleInfoLocal>()) {
return nullptr;
}
const auto& file_path = o.get<ObjComp::F::SingleInfoLocal>().file_path;
if (file_path.empty()) {
return nullptr;
}
// TODO: read-only one too
// since they are mapped, is it efficient to have multiple?
auto res = construct_file2_rw_mapped(file_path, -1);
if (!res || !res->isGood()) {
std::cerr << "SHA1MF error: failed constructing mapped file '" << file_path << "'\n";
return nullptr;
}
return res;
}
} // Backends


@@ -0,0 +1,39 @@
#pragma once
#include <solanaceae/object_store/object_store.hpp>
#include <string>
#include <string_view>
#include <memory>
namespace Backends {
// fwd to hide the threading headers
struct SHA1MappedFilesystem_InfoBuilderState;
struct SHA1MappedFilesystem : public StorageBackendI {
std::unique_ptr<SHA1MappedFilesystem_InfoBuilderState> _ibs;
SHA1MappedFilesystem(
ObjectStore2& os
);
~SHA1MappedFilesystem(void);
// pull from info builder queue
// call from main thread (os thread?) often
void tick(void);
ObjectHandle newObject(ByteSpan id) override;
// performs async file hashing
// create message in cb
void newFromFile(std::string_view file_name, std::string_view file_path, std::function<void(ObjectHandle o)>&& cb/*, bool merge_preexisting = false*/);
// might return pre-existing?
ObjectHandle newFromInfoHash(ByteSpan info_hash);
std::unique_ptr<File2I> file2(Object o, FILE2_FLAGS flags) override;
};
} // Backends


@@ -0,0 +1,396 @@
#include "./chunk_picker.hpp"
#include <solanaceae/tox_contacts/components.hpp>
#include "./contact_components.hpp"
#include <solanaceae/object_store/meta_components_file.hpp>
#include "./components.hpp"
#include <algorithm>
#include <iostream>
// TODO: move ps to own file
// picker strategies are generators
// gen returns true if a valid chunk was picked
// ps should be lightweight and hold no persistent state
// ps produce an index only once
// simply scans from the beginning, requesting chunks in that order
struct PickerStrategySequential {
const BitSet& chunk_candidates;
const size_t total_chunks;
size_t i {0u};
PickerStrategySequential(
const BitSet& chunk_candidates_,
const size_t total_chunks_,
const size_t start_offset_ = 0u
) :
chunk_candidates(chunk_candidates_),
total_chunks(total_chunks_),
i(start_offset_)
{}
bool gen(size_t& out_chunk_idx) {
for (; i < total_chunks && i < chunk_candidates.size_bits(); i++) {
if (chunk_candidates[i]) {
out_chunk_idx = i;
i++;
return true;
}
}
return false;
}
};
// chooses a random start position and then requests linearly from there
struct PickerStrategyRandom {
const BitSet& chunk_candidates;
const size_t total_chunks;
std::minstd_rand& rng;
size_t count {0u};
size_t i {rng()%total_chunks};
PickerStrategyRandom(
const BitSet& chunk_candidates_,
const size_t total_chunks_,
std::minstd_rand& rng_
) :
chunk_candidates(chunk_candidates_),
total_chunks(total_chunks_),
rng(rng_)
{}
bool gen(size_t& out_chunk_idx) {
for (; count < total_chunks; count++, i++) {
// wrap around
if (i >= total_chunks) {
i = i%total_chunks;
}
if (chunk_candidates[i]) {
out_chunk_idx = i;
count++;
i++;
return true;
}
}
return false;
}
};
// switches randomly between random and sequential
struct PickerStrategyRandomSequential {
PickerStrategyRandom psr;
PickerStrategySequential pssf;
// TODO: configurable
std::bernoulli_distribution d{0.5f};
PickerStrategyRandomSequential(
const BitSet& chunk_candidates_,
const size_t total_chunks_,
std::minstd_rand& rng_,
const size_t start_offset_ = 0u
) :
psr(chunk_candidates_, total_chunks_, rng_),
pssf(chunk_candidates_, total_chunks_, start_offset_)
{}
bool gen(size_t& out_chunk_idx) {
if (d(psr.rng)) {
return psr.gen(out_chunk_idx);
} else {
return pssf.gen(out_chunk_idx);
}
}
};
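// usage sketch for the strategies above (editorial, not repo code); BitSet
// and the strategy types are as defined in this file, <vector> and <random>
// are assumed to be available
static std::vector<size_t> pickSome_sketch(const BitSet& chunk_candidates, size_t total_chunks, size_t start_offset, size_t want, std::minstd_rand& rng) {
	PickerStrategyRandomSequential ps(chunk_candidates, total_chunks, rng, start_offset);
	std::vector<size_t> picked;
	size_t chunk_idx {0};
	while (picked.size() < want && ps.gen(chunk_idx)) {
		picked.push_back(chunk_idx); // gen() yields each candidate index at most once
	}
	return picked;
}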
// TODO: return bytes instead, so it can be done chunk size independent
static constexpr size_t flowWindowToRequestCount(size_t flow_window) {
// based on 500KiB/s with ~0.05s delay looks fine
// increase to 4 at wnd >= 25*1024
if (flow_window >= 25*1024) {
return 4u;
}
return 3u;
}
void ChunkPicker::updateParticipation(
Contact3Handle c,
ObjectRegistry& objreg
) {
if (!c.all_of<Contact::Components::FT1Participation>()) {
participating_unfinished.clear();
return;
}
entt::dense_set<Object> checked;
for (const Object ov : c.get<Contact::Components::FT1Participation>().participating) {
using Priority = ObjComp::Ephemeral::File::DownloadPriority::Priority;
const ObjectHandle o {objreg, ov};
if (participating_unfinished.contains(o)) {
if (!o.all_of<Components::FT1ChunkSHA1Cache, Components::FT1InfoSHA1>()) {
participating_unfinished.erase(o);
continue;
}
if (o.all_of<ObjComp::Ephemeral::File::TagTransferPaused>()) {
participating_unfinished.erase(o);
continue;
}
if (o.all_of<ObjComp::F::TagLocalHaveAll>()) {
participating_unfinished.erase(o);
continue;
}
// TODO: optimize this to only change on dirty, or something
if (o.all_of<ObjComp::Ephemeral::File::DownloadPriority>()) {
Priority prio = o.get<ObjComp::Ephemeral::File::DownloadPriority>().p;
uint16_t pskips =
prio == Priority::HIGHEST ? 0u :
prio == Priority::HIGH ? 1u :
prio == Priority::NORMAL ? 2u :
prio == Priority::LOW ? 4u :
8u // LOWEST
;
participating_unfinished.at(o).should_skip = pskips;
}
} else {
if (!o.all_of<Components::FT1ChunkSHA1Cache, Components::FT1InfoSHA1>()) {
continue;
}
if (o.all_of<ObjComp::Ephemeral::File::TagTransferPaused>()) {
continue;
}
if (!o.all_of<ObjComp::F::TagLocalHaveAll>()) {
Priority prio = Priority::NORMAL;
if (o.all_of<ObjComp::Ephemeral::File::DownloadPriority>()) {
prio = o.get<ObjComp::Ephemeral::File::DownloadPriority>().p;
}
uint16_t pskips =
prio == Priority::HIGHEST ? 0u :
prio == Priority::HIGH ? 1u :
prio == Priority::NORMAL ? 2u :
prio == Priority::LOW ? 4u :
8u // LOWEST
;
participating_unfinished.emplace(o, ParticipationEntry{pskips});
}
}
checked.emplace(o);
}
// now we still need to remove left over unfinished.
// TODO: how did they get left over
entt::dense_set<Object> to_remove;
for (const auto& [o, _] : participating_unfinished) {
if (!checked.contains(o)) {
std::cerr << "unfinished contained non participating\n";
to_remove.emplace(o);
}
}
for (const auto& o : to_remove) {
participating_unfinished.erase(o);
}
}
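// editorial sketch: the priority -> skip mapping above appears twice; a
// helper like this would keep both call sites in sync (not repo code)
static uint16_t prioToSkips(ObjComp::Ephemeral::File::DownloadPriority::Priority prio) {
	using Priority = ObjComp::Ephemeral::File::DownloadPriority::Priority;
	switch (prio) {
		case Priority::HIGHEST: return 0u;
		case Priority::HIGH:    return 1u;
		case Priority::NORMAL:  return 2u;
		case Priority::LOW:     return 4u;
		default:                return 8u; // LOWEST
	}
}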
std::vector<ChunkPicker::ContentChunkR> ChunkPicker::updateChunkRequests(
Contact3Handle c,
ObjectRegistry& objreg,
const ReceivingTransfers& rt,
const size_t open_requests
//const size_t flow_window
//NGCFT1& nft
) {
if (!static_cast<bool>(c)) {
assert(false); return {};
}
if (!c.all_of<Contact::Components::ToxGroupPeerEphemeral>()) {
assert(false); return {};
}
const auto [group_number, peer_number] = c.get<Contact::Components::ToxGroupPeerEphemeral>();
updateParticipation(c, objreg);
if (participating_unfinished.empty()) {
participating_in_last = entt::null;
return {};
}
std::vector<ContentChunkR> req_ret;
// count running tf and open requests
const size_t num_ongoing_transfers = rt.sizePeer(group_number, peer_number);
// TODO: account for open requests
const int64_t num_total = num_ongoing_transfers + open_requests;
// TODO: base max on rate(chunks per sec), gonna be ass with variable chunk size
//const size_t num_max = std::max(max_tf_chunk_requests, flowWindowToRequestCount(flow_window));
const size_t num_max = max_tf_chunk_requests;
const size_t num_requests = std::max<int64_t>(0, int64_t(num_max)-num_total);
std::cerr << "CP: want " << num_requests << "(rt:" << num_ongoing_transfers << " or:" << open_requests << ") from " << group_number << ":" << peer_number << "\n";
// while n < X
// round robin content (remember last obj)
if (!objreg.valid(participating_in_last) || !participating_unfinished.count(participating_in_last)) {
participating_in_last = participating_unfinished.begin()->first;
}
assert(objreg.valid(participating_in_last));
auto it = participating_unfinished.find(participating_in_last);
// hard limit robin rounds to array size times 20
for (size_t i = 0; req_ret.size() < num_requests && i < participating_unfinished.size()*20; i++, it++) {
if (it == participating_unfinished.end()) {
it = participating_unfinished.begin();
}
if (it->second.skips < it->second.should_skip) {
it->second.skips++;
continue;
}
it->second.skips = 0;
ObjectHandle o {objreg, it->first};
// intersect self have with other have
if (!o.all_of<Components::RemoteHaveBitset, Components::FT1ChunkSHA1Cache, Components::FT1InfoSHA1>()) {
// rare case where no one else has anything
continue;
}
if (o.all_of<ObjComp::F::TagLocalHaveAll>()) {
std::cerr << "ChunkPicker error: completed content still in participating_unfinished!\n";
continue;
}
//const auto& cc = o.get<Components::FT1ChunkSHA1Cache>();
const auto& others_have = o.get<Components::RemoteHaveBitset>().others;
auto other_it = others_have.find(c);
if (other_it == others_have.end()) {
// rare case where the other is participating but has nothing
continue;
}
const auto& other_have = other_it->second;
const auto& info = o.get<Components::FT1InfoSHA1>();
const auto total_chunks = info.chunks.size();
const auto* lhb = o.try_get<ObjComp::F::LocalHaveBitset>();
// if we dont have anything, this might not exist yet
BitSet chunk_candidates = (lhb != nullptr && lhb->have.size_bits() >= total_chunks)
? lhb->have
: BitSet{total_chunks}
;
if (!other_have.have_all) {
// AND is the same as ~(~A | ~B)
// that means we leave chunk_candidates as (have is inverted want)
// merge is or
// invert at the end
chunk_candidates
.merge(other_have.have.invert())
.invert();
// TODO: add intersect for more perf
} else {
chunk_candidates.invert();
}
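// worked example (illustrative): 4 chunks, we have {0,1} -> local have = 0b0011,
// the peer has {1,2} -> their have = 0b0110
// candidates = ~(0b0011 | ~0b0110) = ~(0b0011 | 0b1001) = ~0b1011 = 0b0100
// -> only chunk 2 qualifies: we are missing it and the peer has it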
auto& requested_chunks = o.get_or_emplace<Components::FT1ChunkSHA1Requested>().chunks;
// TODO: trim off the round-up to 8, since those bits are now always set
// now select (globally) unrequested chunks the other peer has
// TODO: how do we prioritize within a file?
// - sequential (walk from start (or readhead?))
// - random (choose random start pos and walk)
// - random/sequential (randomly choose between the 2)
// - rarest (keep track of rarity and sort by that)
// - streaming (use readhead to determine time-critical chunks and, potentially over-requesting, fetch those first (relative to the stream head), otherwise sequential;
//   maybe look into libtorrent's deadline stuff)
// - arbitrary priority maps/functions (and combine with the above in ratios)
// TODO: configurable
size_t start_offset {0u};
if (o.all_of<ObjComp::Ephemeral::File::ReadHeadHint>()) {
const auto byte_offset = o.get<ObjComp::Ephemeral::File::ReadHeadHint>().offset_into_file;
if (byte_offset <= info.file_size) {
start_offset = byte_offset/info.chunk_size;
} else {
// error?
}
}
//PickerStrategySequential ps(chunk_candidates, total_chunks, start_offset);
//PickerStrategyRandom ps(chunk_candidates, total_chunks, _rng);
PickerStrategyRandomSequential ps(chunk_candidates, total_chunks, _rng, start_offset);
size_t out_chunk_idx {0};
size_t req_from_this_o {0};
while (ps.gen(out_chunk_idx) && req_ret.size() < num_requests && req_from_this_o < std::max<size_t>(total_chunks/3, 1)) {
// out_chunk_idx is a potential candidate we can request from the peer
// - check against double requests
if (std::find_if(req_ret.cbegin(), req_ret.cend(), [&](const ContentChunkR& x) -> bool {
return x.object == o && x.chunk_index == out_chunk_idx;
}) != req_ret.cend()) {
// already in return array
// how did we get here? should we fast-exit? with a sequential strategy we would want to
continue; // skip
}
// - check against global requests (this might differ based on strat)
if (requested_chunks.count(out_chunk_idx) != 0) {
continue;
}
// - we check against globally running transfers (this might differ based on strat)
if (rt.containsChunk(o, out_chunk_idx)) {
continue;
}
// if nothing else blocks this, add to ret
req_ret.push_back(ContentChunkR{o, out_chunk_idx});
// TODO: move this after packet was sent successfully
// (move net in? hmm)
requested_chunks[out_chunk_idx] = Components::FT1ChunkSHA1Requested::Entry{0.f, c};
req_from_this_o++;
}
}
//if (it == participating_unfinished.end() || ++it == participating_unfinished.end()) {
if (it == participating_unfinished.end()) {
participating_in_last = entt::null;
} else {
participating_in_last = it->first;
}
if (req_ret.size() < num_requests) {
std::cerr << "CP: could not fulfil, " << group_number << ":" << peer_number << " only has " << req_ret.size() << " candidates\n";
}
// -- no -- (just compat with old code, ignore)
// if n < X
// optimistically request 1 chunk other does not have
// (don't mark as requested? or lower cooldown to re-request?)
return req_ret;
}
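A minimal standalone sketch of the skip-based round robin used above (illustrative only, not part of the diff; entry A stands in for Priority::HIGHEST with should_skip 0, entry B for Priority::NORMAL with should_skip 2):

#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

struct Entry { uint16_t should_skip {0}; uint16_t skips {0}; };

int main(void) {
	std::vector<Entry> entries {{0u}, {2u}}; // A, B
	for (std::size_t visit = 0; visit < 12; visit++) {
		auto& e = entries[visit % entries.size()];
		if (e.skips < e.should_skip) {
			e.skips++; // not its turn yet
			continue;
		}
		e.skips = 0;
		std::cout << "process " << (visit % 2 == 0 ? 'A' : 'B') << "\n";
	}
	// A is processed on every visit, B only on every third of its visits,
	// which is how lower priorities get served less often without starving.
	return 0;
}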

View File

@ -0,0 +1,77 @@
#pragma once
#include <solanaceae/contact/contact_model3.hpp>
#include <solanaceae/object_store/object_store.hpp>
#include "./components.hpp"
#include "./receiving_transfers.hpp"
#include <entt/container/dense_map.hpp>
#include <entt/container/dense_set.hpp>
#include <cstddef>
#include <cstdint>
#include <random>
//#include <solanaceae/ngc_ft1/ngcft1.hpp>
// goal is to always keep 2 transfers running and X(6) requests queued up
// per peer
struct ChunkPickerUpdateTag {};
struct ChunkPickerTimer {
// adds update tag on 0
float timer {0.f};
};
// contact component?
struct ChunkPicker {
// max transfers
static constexpr size_t max_tf_info_requests {1};
static constexpr size_t max_tf_chunk_requests {4}; // TODO: dynamic, function/factor of (window(delay*speed)/chunksize)
// TODO: cheaper init? tls rng for deep seeding?
std::minstd_rand _rng{std::random_device{}()};
// TODO: handle with hash utils?
struct ParticipationEntry {
ParticipationEntry(void) {}
ParticipationEntry(uint16_t s) : should_skip(s) {}
// skips in round robin -> lower should_skip => higher priority
// TODO: replace with enum value
uint16_t should_skip {2}; // 0 high, 8 low (double each time? 0,1,2,4,8)
uint16_t skips {0};
};
entt::dense_map<Object, ParticipationEntry> participating_unfinished;
Object participating_in_last {entt::null};
private: // TODO: properly sort
// updates participating_unfinished
void updateParticipation(
Contact3Handle c,
ObjectRegistry& objreg
);
public:
// ---------- tick ----------
//void sendInfoRequests();
// is this like a system?
struct ContentChunkR {
ObjectHandle object;
size_t chunk_index;
};
// returns list of chunks to request
[[nodiscard]] std::vector<ContentChunkR> updateChunkRequests(
Contact3Handle c,
ObjectRegistry& objreg,
const ReceivingTransfers& rt,
const size_t open_requests
//const size_t flow_window
//NGCFT1& nft
);
};

View File

@ -0,0 +1,127 @@
#include "./chunk_picker_systems.hpp"
#include <solanaceae/ngc_ft1/ngcft1_file_kind.hpp>
#include "./components.hpp"
#include "./chunk_picker.hpp"
#include "./contact_components.hpp"
#include <cassert>
#include <iostream>
namespace Systems {
void chunk_picker_updates(
Contact3Registry& cr,
ObjectRegistry& os_reg,
const entt::dense_map<Contact3, size_t>& peer_open_requests,
const ReceivingTransfers& receiving_transfers,
NGCFT1& nft, // TODO: remove this somehow
const float delta
) {
std::vector<Contact3Handle> cp_to_remove;
// first, update timers
cr.view<ChunkPickerTimer>().each([&cr, delta](const Contact3 cv, ChunkPickerTimer& cpt) {
cpt.timer -= delta;
if (cpt.timer <= 0.f) {
cr.emplace_or_replace<ChunkPickerUpdateTag>(cv);
}
});
//std::cout << "number of chunkpickers: " << _cr.storage<ChunkPicker>().size() << ", of which " << _cr.storage<ChunkPickerUpdateTag>().size() << " need updating\n";
// now check for potentially missing cp
auto cput_view = cr.view<ChunkPickerUpdateTag>();
cput_view.each([&cr, &cp_to_remove](const Contact3 cv) {
Contact3Handle c{cr, cv};
//std::cout << "cput :)\n";
if (!c.all_of<Contact::Components::ToxGroupPeerEphemeral, Contact::Components::FT1Participation>()) {
std::cout << "cput uh nuh :(\n";
cp_to_remove.push_back(c);
return;
}
if (!c.all_of<ChunkPicker>()) {
std::cout << "creating new cp!!\n";
c.emplace<ChunkPicker>();
c.emplace_or_replace<ChunkPickerTimer>();
}
});
// now update all cp that are tagged
cr.view<ChunkPicker, ChunkPickerUpdateTag>().each([&cr, &os_reg, &peer_open_requests, &receiving_transfers, &nft, &cp_to_remove](const Contact3 cv, ChunkPicker& cp) {
Contact3Handle c{cr, cv};
if (!c.all_of<Contact::Components::ToxGroupPeerEphemeral, Contact::Components::FT1Participation>()) {
cp_to_remove.push_back(c);
return;
}
//std::cout << "cpu :)\n";
// HACK: expensive, dont do every tick, only on events
// do verification in debug instead?
//cp.validateParticipation(c, _os.registry());
size_t peer_open_request = 0;
if (peer_open_requests.contains(c)) {
peer_open_request += peer_open_requests.at(c);
}
auto new_requests = cp.updateChunkRequests(
c,
os_reg,
receiving_transfers,
peer_open_request
);
if (new_requests.empty()) {
// updateChunkRequests updates the unfinished
// TODO: pull out and check there?
if (cp.participating_unfinished.empty()) {
std::cout << "destroying empty useless cp\n";
cp_to_remove.push_back(c);
} else {
// most likely will have something soon
// TODO: mark dirty on have instead?
c.get_or_emplace<ChunkPickerTimer>().timer = 10.f;
}
return;
}
assert(c.all_of<Contact::Components::ToxGroupPeerEphemeral>());
const auto [group_number, peer_number] = c.get<Contact::Components::ToxGroupPeerEphemeral>();
for (const auto [r_o, r_idx] : new_requests) {
auto& cc = r_o.get<Components::FT1ChunkSHA1Cache>();
const auto& info = r_o.get<Components::FT1InfoSHA1>();
// request chunk_idx
nft.NGC_FT1_send_request_private(
group_number, peer_number,
static_cast<uint32_t>(NGCFT1_file_kind::HASH_SHA1_CHUNK),
info.chunks.at(r_idx).data.data(), info.chunks.at(r_idx).size()
);
std::cout << "SHA1_NGCFT1: requesting chunk [" << info.chunks.at(r_idx) << "] from " << group_number << ":" << peer_number << "\n";
}
// force update every minute
// TODO: add small random bias to spread load
c.get_or_emplace<ChunkPickerTimer>().timer = 60.f;
});
// unmark all marked
cr.clear<ChunkPickerUpdateTag>();
assert(cr.storage<ChunkPickerUpdateTag>().empty());
for (const auto& c : cp_to_remove) {
c.remove<ChunkPicker, ChunkPickerTimer>();
}
}
} // Systems

View File

@ -0,0 +1,22 @@
#pragma once
#include <solanaceae/contact/contact_model3.hpp>
#include <solanaceae/object_store/object_store.hpp>
#include <solanaceae/tox_contacts/components.hpp>
#include <solanaceae/ngc_ft1/ngcft1.hpp>
#include "./receiving_transfers.hpp"
namespace Systems {
void chunk_picker_updates(
Contact3Registry& cr,
ObjectRegistry& os_reg,
const entt::dense_map<Contact3, size_t>& peer_open_requests,
const ReceivingTransfers& receiving_transfers,
NGCFT1& nft, // TODO: remove this somehow
const float delta
);
} // Systems

View File

@ -0,0 +1,68 @@
#include "./components.hpp"
#include <solanaceae/object_store/meta_components_file.hpp>
namespace Components {
std::vector<size_t> FT1ChunkSHA1Cache::chunkIndices(const SHA1Digest& hash) const {
const auto it = chunk_hash_to_index.find(hash);
if (it != chunk_hash_to_index.cend()) {
return it->second;
} else {
return {};
}
}
bool FT1ChunkSHA1Cache::haveChunk(ObjectHandle o, const SHA1Digest& hash) const {
if (o.all_of<ObjComp::F::TagLocalHaveAll>()) {
return true;
}
const auto* lhb = o.try_get<ObjComp::F::LocalHaveBitset>();
if (lhb == nullptr) {
return false; // we dont have anything yet
}
if (auto i_vec = chunkIndices(hash); !i_vec.empty()) {
// TODO: should i test all?
//return have_chunk[i_vec.front()];
return lhb->have[i_vec.front()];
}
// not part of this file
return false;
}
void ReAnnounceTimer::set(const float new_timer) {
timer = new_timer;
last_max = new_timer;
}
void ReAnnounceTimer::reset(void) {
if (last_max <= 0.01f) {
last_max = 1.f;
}
last_max *= 2.f;
timer = last_max;
}
void ReAnnounceTimer::lower(void) {
timer *= 0.1f;
last_max *= 0.1f;
}
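// illustrative timeline: set(10.f) -> timer = last_max = 10
// each reset() then doubles the interval: 20, 40, 80, ...
// lower() (e.g. on a peer joining) cuts both to 10%, so the next announce fires soon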
void TransferStatsTally::Peer::trimSent(const float time_now) {
while (recently_sent.size() > 4 && time_now - recently_sent.front().time_point > 1.f) {
recently_sent.pop_front();
}
}
void TransferStatsTally::Peer::trimReceived(const float time_now) {
while (recently_received.size() > 4 && time_now - recently_received.front().time_point > 1.f) {
recently_received.pop_front();
}
}
} // Components

View File

@ -0,0 +1,126 @@
#pragma once
#include <solanaceae/contact/components.hpp>
#include <solanaceae/message3/components.hpp>
#include <solanaceae/message3/registry_message_model.hpp>
#include <solanaceae/object_store/meta_components_file.hpp>
#include <solanaceae/util/bitset.hpp>
#include <entt/container/dense_set.hpp>
#include <entt/container/dense_map.hpp>
#include "./ft1_sha1_info.hpp"
#include "./hash_utils.hpp"
#include <vector>
#include <deque>
// TODO: rename to object components
namespace Components {
struct Messages {
// dense set instead?
std::vector<Message3Handle> messages;
};
using FT1InfoSHA1 = ::FT1InfoSHA1; // pull the global type into the Components namespace
struct FT1InfoSHA1Data {
std::vector<uint8_t> data;
};
struct FT1InfoSHA1Hash {
std::vector<uint8_t> hash;
};
struct FT1ChunkSHA1Cache {
// TODO: extract have_count to generic comp
// have_chunk has info.chunks.size() bits, or is empty if have_all
// keep in mind the bitset rounds up to a multiple of 8 bits
//BitSet have_chunk{0};
//bool have_all {false};
size_t have_count {0}; // move?
entt::dense_map<SHA1Digest, std::vector<size_t>> chunk_hash_to_index;
std::vector<size_t> chunkIndices(const SHA1Digest& hash) const;
bool haveChunk(ObjectHandle o, const SHA1Digest& hash) const;
};
struct FT1File2 {
// the cached file2 for faster access
// should be destroyed when no activity and recreated on demand
std::unique_ptr<File2I> file;
};
struct FT1ChunkSHA1Requested {
// requested chunks with a timer since last request
struct Entry {
float timer {0.f};
Contact3 c {entt::null};
};
entt::dense_map<size_t, Entry> chunks;
};
// TODO: once announce is shipped, remove the "Suspected"
struct SuspectedParticipants {
entt::dense_set<Contact3> participants;
};
struct RemoteHaveBitset {
struct Entry {
bool have_all {false};
BitSet have;
};
entt::dense_map<Contact3, Entry> others;
};
struct ReRequestInfoTimer {
float timer {0.f};
};
struct AnnounceTargets {
entt::dense_set<Contact3> targets;
};
struct ReAnnounceTimer {
float timer {0.f};
float last_max {0.f};
void set(const float new_timer);
// exponential back-off
void reset(void);
// on peer join to group
void lower(void);
};
struct TransferStatsSeparated {
entt::dense_map<Contact3, ObjComp::Ephemeral::File::TransferStats> stats;
};
// used to populate stats
struct TransferStatsTally {
struct Peer {
struct Entry {
float time_point {0.f};
uint64_t bytes {0u};
bool accounted {false};
};
std::deque<Entry> recently_sent;
std::deque<Entry> recently_received;
// keep at least 4 entries or 1 sec worth
// trim too old front
void trimSent(const float time_now);
void trimReceived(const float time_now);
};
entt::dense_map<Contact3, Peer> tally;
};
} // Components

View File

@ -0,0 +1,13 @@
#pragma once
#include <solanaceae/object_store/object_store.hpp>
#include <entt/container/dense_set.hpp>
namespace Contact::Components {
struct FT1Participation {
entt::dense_set<Object> participating;
};
} // Contact::Components

View File

@ -0,0 +1,8 @@
#include "./file_constructor.hpp"
#include "./file_rw_mapped.hpp"
std::unique_ptr<File2I> construct_file2_rw_mapped(std::string_view file_path, int64_t file_size) {
return std::make_unique<File2RWMapped>(file_path, file_size);
}

View File

@ -0,0 +1,9 @@
#pragma once
#include <solanaceae/file/file2.hpp>
#include <memory>
#include <string_view>
std::unique_ptr<File2I> construct_file2_rw_mapped(std::string_view file_path, int64_t file_size = -1);

View File

@ -1,58 +1,83 @@
#pragma once
#include <solanaceae/message3/file.hpp>
#include <solanaceae/file/file2.hpp>
#include "./mio.hpp"
#include <filesystem>
#include <fstream>
#include <iostream>
#include <cstring>
#include <cassert>
struct FileRWMapped : public FileI {
struct File2RWMapped : public File2I {
mio::ummap_sink _file_map;
// TODO: add truncate support?
FileRWMapped(std::string_view file_path, uint64_t file_size) {
_file_size = file_size;
// TODO: rw always true?
File2RWMapped(std::string_view file_path, int64_t file_size = -1) : File2I(true, true) {
std::filesystem::path native_file_path{file_path};
if (!std::filesystem::exists(file_path)) {
std::ofstream(std::string{file_path}) << '\0'; // force create the file
if (!std::filesystem::exists(native_file_path)) {
std::ofstream(native_file_path) << '\0'; // force create the file
}
_file_size = std::filesystem::file_size(native_file_path);
if (file_size >= 0 && _file_size != file_size) {
_file_size = file_size;
std::filesystem::resize_file(native_file_path, file_size); // ensure size, usually sparse
}
std::filesystem::resize_file(file_path, file_size); // ensure size, usually sparse
std::error_code err;
// sink, is also read
_file_map.map(std::string{file_path}, 0, file_size, err);
_file_map.map(native_file_path.u8string(), 0, _file_size, err);
if (err) {
// TODO: error handling
std::cerr << "File2RWMapped error: mapping file failed " << err << "\n";
return;
}
}
virtual ~FileRWMapped(void) override {}
virtual ~File2RWMapped(void) override {}
bool isGood(void) override {
return _file_map.is_mapped();
}
std::vector<uint8_t> read(uint64_t pos, uint64_t size) override {
if (pos+size > _file_size) {
//assert(false && "read past end");
return {};
}
return {_file_map.data()+pos, _file_map.data()+(pos+size)};
}
bool write(uint64_t pos, const std::vector<uint8_t>& data) override {
if (pos+data.size() > _file_size) {
bool write(const ByteSpan data, int64_t pos = -1) override {
// TODO: support streaming write
if (pos < 0) {
return false;
}
std::memcpy(_file_map.data()+pos, data.data(), data.size());
if (data.empty()) {
return true; // false?
}
// file size is fixed for mmapped files
if (pos+data.size > _file_size) {
return false;
}
std::memcpy(_file_map.data()+pos, data.ptr, data.size);
return true;
}
ByteSpanWithOwnership read(uint64_t size, int64_t pos = -1) override {
// TODO: support streaming read
if (pos < 0) {
assert(false && "streaming not implemented");
return ByteSpan{};
}
if (pos+size > _file_size) {
assert(false && "read past end");
return ByteSpan{};
}
// return non-owning
return ByteSpan{_file_map.data()+pos, size};
}
};

View File

@ -1,5 +1,8 @@
#include "./ft1_sha1_info.hpp"
// next power of two
#include <entt/core/memory.hpp>
#include <sodium.h>
SHA1Digest::SHA1Digest(const std::vector<uint8_t>& v) {
@ -28,6 +31,27 @@ std::ostream& operator<<(std::ostream& out, const SHA1Digest& v) {
return out;
}
uint32_t chunkSizeFromFileSize(uint64_t file_size) {
const uint64_t fs_low {UINT64_C(512)*1024};
const uint64_t fs_high {UINT64_C(2)*1024*1024*1024};
const uint32_t cs_low {32*1024};
const uint32_t cs_high {4*1024*1024};
if (file_size <= fs_low) { // 512kib
return cs_low; // 32kib
} else if (file_size >= fs_high) { // 2gib
return cs_high; // 4mib
}
double t = file_size - fs_low;
t /= fs_high;
double x = (1 - t) * cs_low + t * cs_high;
return entt::next_power_of_two(uint64_t(x));
}
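// rough sanity values for the mapping above (illustrative; entt::next_power_of_two
// rounds up to the next power of two):
//   <= 512 KiB -> 32 KiB chunks (clamped)
//   100 MiB    -> ~230 KiB interpolated, 256 KiB after rounding
//   1 GiB      -> ~2.0 MiB interpolated, 4 MiB after rounding
//   >= 2 GiB   -> 4 MiB chunks (clamped)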
size_t FT1InfoSHA1::chunkSize(size_t chunk_index) const {
if (chunk_index+1 == chunks.size()) {
// last chunk

View File

@ -18,28 +18,30 @@ struct SHA1Digest {
bool operator==(const SHA1Digest& other) const { return data == other.data; }
bool operator!=(const SHA1Digest& other) const { return data != other.data; }
size_t size(void) const { return data.size(); }
constexpr size_t size(void) const { return data.size(); }
};
std::ostream& operator<<(std::ostream& out, const SHA1Digest& v);
namespace std { // inject
template<> struct hash<SHA1Digest> {
std::size_t operator()(const SHA1Digest& h) const noexcept {
std::uint64_t operator()(const SHA1Digest& h) const noexcept {
return
size_t(h.data[0]) << (0*8) |
size_t(h.data[1]) << (1*8) |
size_t(h.data[2]) << (2*8) |
size_t(h.data[3]) << (3*8) |
size_t(h.data[4]) << (4*8) |
size_t(h.data[5]) << (5*8) |
size_t(h.data[6]) << (6*8) |
size_t(h.data[7]) << (7*8)
std::uint64_t(h.data[0]) << (0*8) |
std::uint64_t(h.data[1]) << (1*8) |
std::uint64_t(h.data[2]) << (2*8) |
std::uint64_t(h.data[3]) << (3*8) |
std::uint64_t(h.data[4]) << (4*8) |
std::uint64_t(h.data[5]) << (5*8) |
std::uint64_t(h.data[6]) << (6*8) |
std::uint64_t(h.data[7]) << (7*8)
;
}
};
} // std
uint32_t chunkSizeFromFileSize(uint64_t file_size);
struct FT1InfoSHA1 {
std::string file_name;
uint64_t file_size {0};

View File

@ -0,0 +1,48 @@
#include "./participation.hpp"
#include "./contact_components.hpp"
#include "./chunk_picker.hpp"
#include <iostream>
bool addParticipation(Contact3Handle c, ObjectHandle o) {
bool was_new {false};
assert(static_cast<bool>(o));
assert(static_cast<bool>(c));
if (static_cast<bool>(o)) {
const auto [_, inserted] = o.get_or_emplace<Components::SuspectedParticipants>().participants.emplace(c);
was_new = inserted;
}
if (static_cast<bool>(c)) {
const auto [_, inserted] = c.get_or_emplace<Contact::Components::FT1Participation>().participating.emplace(o);
was_new = was_new || inserted;
}
//std::cout << "added " << (was_new?"new ":"") << "participant\n";
return was_new;
}
void removeParticipation(Contact3Handle c, ObjectHandle o) {
assert(static_cast<bool>(o));
assert(static_cast<bool>(c));
if (static_cast<bool>(o) && o.all_of<Components::SuspectedParticipants>()) {
o.get<Components::SuspectedParticipants>().participants.erase(c);
}
if (static_cast<bool>(c)) {
if (c.all_of<Contact::Components::FT1Participation>()) {
c.get<Contact::Components::FT1Participation>().participating.erase(o);
}
if (c.all_of<ChunkPicker>()) {
c.get<ChunkPicker>().participating_unfinished.erase(o);
}
}
//std::cout << "removed participant\n";
}

View File

@ -0,0 +1,8 @@
#pragma once
#include <solanaceae/object_store/object_store.hpp>
#include <solanaceae/contact/contact_model3.hpp>
bool addParticipation(Contact3Handle c, ObjectHandle o);
void removeParticipation(Contact3Handle c, ObjectHandle o);

View File

@ -0,0 +1,84 @@
#include "./re_announce_systems.hpp"
#include "./components.hpp"
#include <solanaceae/object_store/meta_components_file.hpp>
#include <solanaceae/tox_contacts/components.hpp>
#include <solanaceae/ngc_ft1/ngcft1_file_kind.hpp>
#include <vector>
#include <cassert>
namespace Systems {
void re_announce(
ObjectRegistry& os_reg,
Contact3Registry& cr,
NGCEXTEventProvider& neep,
const float delta
) {
std::vector<Object> to_remove;
os_reg.view<Components::ReAnnounceTimer>().each([&os_reg, &cr, &neep, &to_remove, delta](Object ov, Components::ReAnnounceTimer& rat) {
ObjectHandle o{os_reg, ov};
// TODO: pause
//// if paused -> remove
//if (o.all_of<Message::Components::Transfer::TagPaused>()) {
// to_remove.push_back(ov);
// return;
//}
// if not downloading or info incomplete -> remove
if (!o.all_of<Components::FT1ChunkSHA1Cache, Components::FT1InfoSHA1Hash, Components::AnnounceTargets>()) {
to_remove.push_back(ov);
assert(false && "transfer in broken state");
return;
}
if (o.all_of<ObjComp::F::TagLocalHaveAll>()) {
// transfer done, we stop announcing
to_remove.push_back(ov);
return;
}
// update all timers
rat.timer -= delta;
// send announces
if (rat.timer <= 0.f) {
rat.reset(); // exponential back-off
std::vector<uint8_t> announce_id;
const uint32_t file_kind = static_cast<uint32_t>(NGCFT1_file_kind::HASH_SHA1_INFO);
for (size_t i = 0; i < sizeof(file_kind); i++) {
announce_id.push_back((file_kind>>(i*8)) & 0xff);
}
assert(o.all_of<Components::FT1InfoSHA1Hash>());
const auto& info_hash = o.get<Components::FT1InfoSHA1Hash>().hash;
announce_id.insert(announce_id.cend(), info_hash.cbegin(), info_hash.cend());
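// resulting announce_id layout: [file_kind: 4 bytes little-endian][20-byte sha1 info hash]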
for (const auto cv : o.get<Components::AnnounceTargets>().targets) {
if (cr.all_of<Contact::Components::ToxGroupPeerEphemeral>(cv)) {
// private ?
const auto [group_number, peer_number] = cr.get<Contact::Components::ToxGroupPeerEphemeral>(cv);
neep.send_pc1_announce(group_number, peer_number, announce_id.data(), announce_id.size());
} else if (cr.all_of<Contact::Components::ToxGroupEphemeral>(cv)) {
// public
const auto group_number = cr.get<Contact::Components::ToxGroupEphemeral>(cv).group_number;
neep.send_all_pc1_announce(group_number, announce_id.data(), announce_id.size());
} else {
assert(false && "we dont know how to announce to this target");
}
}
}
});
for (const auto ov : to_remove) {
os_reg.remove<Components::ReAnnounceTimer>(ov);
// we keep the announce target list around (if it exists)
// TODO: should we make the target list more generic?
}
// TODO: how to handle unpause?
}
} // Systems

View File

@ -0,0 +1,17 @@
#pragma once
#include <solanaceae/object_store/object_store.hpp>
#include <solanaceae/contact/contact_model3.hpp>
#include <solanaceae/ngc_ext/ngcext.hpp>
namespace Systems {
void re_announce(
ObjectRegistry& os_reg,
Contact3Registry& cr,
NGCEXTEventProvider& neep,
const float delta
);
} // Systems

View File

@ -0,0 +1,131 @@
#include "./receiving_transfers.hpp"
#include <iostream>
void ReceivingTransfers::tick(float delta) {
for (auto peer_it = _data.begin(); peer_it != _data.end();) {
for (auto it = peer_it->second.begin(); it != peer_it->second.end();) {
it->second.time_since_activity += delta;
// if we have not heard for 20sec, timeout
if (it->second.time_since_activity >= 20.f) {
std::cerr << "SHA1_NGCFT1 warning: receiving tansfer timed out " << "." << int(it->first) << "\n";
// TODO: if info, requeue? or just keep the timer comp? - no, timer comp will continue ticking, even if loading
//it->second.v
it = peer_it->second.erase(it);
} else {
it++;
}
}
if (peer_it->second.empty()) {
// cleanup unused peers; too aggressive?
peer_it = _data.erase(peer_it);
} else {
peer_it++;
}
}
}
ReceivingTransfers::Entry& ReceivingTransfers::emplaceInfo(uint32_t group_number, uint32_t peer_number, uint8_t transfer_id, const Entry::Info& info) {
auto& ent = _data[combine_ids(group_number, peer_number)][transfer_id];
ent.v = info;
return ent;
}
ReceivingTransfers::Entry& ReceivingTransfers::emplaceChunk(uint32_t group_number, uint32_t peer_number, uint8_t transfer_id, const Entry::Chunk& chunk) {
assert(!chunk.chunk_indices.empty());
assert(!containsPeerChunk(group_number, peer_number, chunk.content, chunk.chunk_indices.front()));
auto& ent = _data[combine_ids(group_number, peer_number)][transfer_id];
ent.v = chunk;
return ent;
}
bool ReceivingTransfers::containsPeerTransfer(uint32_t group_number, uint32_t peer_number, uint8_t transfer_id) const {
auto it = _data.find(combine_ids(group_number, peer_number));
if (it == _data.end()) {
return false;
}
return it->second.count(transfer_id);
}
bool ReceivingTransfers::containsChunk(ObjectHandle o, size_t chunk_idx) const {
for (const auto& [_, p] : _data) {
for (const auto& [_2, v] : p) {
if (!v.isChunk()) {
continue;
}
const auto& c = v.getChunk();
if (c.content != o) {
continue;
}
for (const auto idx : c.chunk_indices) {
if (idx == chunk_idx) {
return true;
}
}
}
}
return false;
}
bool ReceivingTransfers::containsPeerChunk(uint32_t group_number, uint32_t peer_number, ObjectHandle o, size_t chunk_idx) const {
auto it = _data.find(combine_ids(group_number, peer_number));
if (it == _data.end()) {
return false;
}
for (const auto& [_, v] : it->second) {
if (!v.isChunk()) {
continue;
}
const auto& c = v.getChunk();
if (c.content != o) {
continue;
}
for (const auto idx : c.chunk_indices) {
if (idx == chunk_idx) {
return true;
}
}
}
return false;
}
void ReceivingTransfers::removePeer(uint32_t group_number, uint32_t peer_number) {
_data.erase(combine_ids(group_number, peer_number));
}
void ReceivingTransfers::removePeerTransfer(uint32_t group_number, uint32_t peer_number, uint8_t transfer_id) {
auto it = _data.find(combine_ids(group_number, peer_number));
if (it == _data.end()) {
return;
}
it->second.erase(transfer_id);
}
size_t ReceivingTransfers::size(void) const {
size_t count {0};
for (const auto& [_, p] : _data) {
count += p.size();
}
return count;
}
size_t ReceivingTransfers::sizePeer(uint32_t group_number, uint32_t peer_number) const {
auto it = _data.find(combine_ids(group_number, peer_number));
if (it == _data.end()) {
return 0;
}
return it->second.size();
}

View File

@ -0,0 +1,66 @@
#pragma once
#include <solanaceae/object_store/object_store.hpp>
#include <entt/container/dense_map.hpp>
#include "./util.hpp"
#include <cstdint>
#include <variant>
#include <vector>
struct ReceivingTransfers {
struct Entry {
struct Info {
ObjectHandle content;
// copy of info data
// too large?
std::vector<uint8_t> info_data;
};
struct Chunk {
ObjectHandle content;
std::vector<size_t> chunk_indices;
// or data?
// if memmapped, this would be just a pointer
};
std::variant<Info, Chunk> v;
float time_since_activity {0.f};
bool isInfo(void) const { return std::holds_alternative<Info>(v); }
bool isChunk(void) const { return std::holds_alternative<Chunk>(v); }
Info& getInfo(void) { return std::get<Info>(v); }
const Info& getInfo(void) const { return std::get<Info>(v); }
Chunk& getChunk(void) { return std::get<Chunk>(v); }
const Chunk& getChunk(void) const { return std::get<Chunk>(v); }
};
// key is groupid + peerid
// TODO: replace with contact
//using ReceivingTransfers = entt::dense_map<uint64_t, entt::dense_map<uint8_t, ReceivingTransferE>>;
entt::dense_map<uint64_t, entt::dense_map<uint8_t, Entry>> _data;
void tick(float delta);
Entry& emplaceInfo(uint32_t group_number, uint32_t peer_number, uint8_t transfer_id, const Entry::Info& info);
Entry& emplaceChunk(uint32_t group_number, uint32_t peer_number, uint8_t transfer_id, const Entry::Chunk& chunk);
bool containsPeer(uint32_t group_number, uint32_t peer_number) const { return _data.count(combine_ids(group_number, peer_number)); }
bool containsPeerTransfer(uint32_t group_number, uint32_t peer_number, uint8_t transfer_id) const;
bool containsChunk(ObjectHandle o, size_t chunk_idx) const;
bool containsPeerChunk(uint32_t group_number, uint32_t peer_number, ObjectHandle o, size_t chunk_idx) const;
auto& getPeer(uint32_t group_number, uint32_t peer_number) { return _data.at(combine_ids(group_number, peer_number)); }
auto& getTransfer(uint32_t group_number, uint32_t peer_number, uint8_t transfer_id) { return getPeer(group_number, peer_number).at(transfer_id); }
void removePeer(uint32_t group_number, uint32_t peer_number);
void removePeerTransfer(uint32_t group_number, uint32_t peer_number, uint8_t transfer_id);
size_t size(void) const;
size_t sizePeer(uint32_t group_number, uint32_t peer_number) const;
};
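A hedged usage sketch of the container above (flow and values illustrative):

// ReceivingTransfers rt;
// rt.emplaceChunk(group, peer, transfer_id, {obj, {chunk_idx}});
// rt.containsPeerChunk(group, peer, obj, chunk_idx); // -> true while running
// rt.tick(delta);                                    // drops transfers idle >= 20s
// rt.removePeerTransfer(group, peer, transfer_id);   // e.g. when the transfer finishes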

View File

@ -0,0 +1,128 @@
#include "./sending_transfers.hpp"
#include <iostream>
#include <cassert>
void SendingTransfers::tick(float delta) {
for (auto peer_it = _data.begin(); peer_it != _data.end();) {
for (auto it = peer_it->second.begin(); it != peer_it->second.end();) {
it->second.time_since_activity += delta;
// if we have not heard for 10min, timeout (lower level event on real timeout)
// (2min was too little, so it seems)
// TODO: do we really need this if we get events?
// FIXME: disabled for now, we are trusting ngcft1 for now
if (false && it->second.time_since_activity >= 60.f*10.f) {
std::cerr << "SHA1_NGCFT1 warning: sending tansfer timed out " << "." << int(it->first) << "\n";
assert(false);
it = peer_it->second.erase(it);
} else {
it++;
}
}
if (peer_it->second.empty()) {
// cleanup unused peers; too aggressive?
peer_it = _data.erase(peer_it);
} else {
peer_it++;
}
}
}
SendingTransfers::Entry& SendingTransfers::emplaceInfo(uint32_t group_number, uint32_t peer_number, uint8_t transfer_id, const Entry::Info& info) {
auto& ent = _data[combine_ids(group_number, peer_number)][transfer_id];
ent.v = info;
return ent;
}
SendingTransfers::Entry& SendingTransfers::emplaceChunk(uint32_t group_number, uint32_t peer_number, uint8_t transfer_id, const Entry::Chunk& chunk) {
assert(!containsPeerChunk(group_number, peer_number, chunk.content, chunk.chunk_index));
auto& ent = _data[combine_ids(group_number, peer_number)][transfer_id];
ent.v = chunk;
return ent;
}
bool SendingTransfers::containsPeerTransfer(uint32_t group_number, uint32_t peer_number, uint8_t transfer_id) const {
auto it = _data.find(combine_ids(group_number, peer_number));
if (it == _data.end()) {
return false;
}
return it->second.count(transfer_id);
}
bool SendingTransfers::containsChunk(ObjectHandle o, size_t chunk_idx) const {
for (const auto& [_, p] : _data) {
for (const auto& [_2, v] : p) {
if (!v.isChunk()) {
continue;
}
const auto& c = v.getChunk();
if (c.content != o) {
continue;
}
if (c.chunk_index == chunk_idx) {
return true;
}
}
}
return false;
}
bool SendingTransfers::containsPeerChunk(uint32_t group_number, uint32_t peer_number, ObjectHandle o, size_t chunk_idx) const {
auto it = _data.find(combine_ids(group_number, peer_number));
if (it == _data.end()) {
return false;
}
for (const auto& [_, v] : it->second) {
if (!v.isChunk()) {
continue;
}
const auto& c = v.getChunk();
if (c.content != o) {
continue;
}
if (c.chunk_index == chunk_idx) {
return true;
}
}
return false;
}
void SendingTransfers::removePeer(uint32_t group_number, uint32_t peer_number) {
_data.erase(combine_ids(group_number, peer_number));
}
void SendingTransfers::removePeerTransfer(uint32_t group_number, uint32_t peer_number, uint8_t transfer_id) {
auto it = _data.find(combine_ids(group_number, peer_number));
if (it == _data.end()) {
return;
}
it->second.erase(transfer_id);
}
size_t SendingTransfers::size(void) const {
size_t count {0};
for (const auto& [_, p] : _data) {
count += p.size();
}
return count;
}
size_t SendingTransfers::sizePeer(uint32_t group_number, uint32_t peer_number) const {
auto it = _data.find(combine_ids(group_number, peer_number));
if (it == _data.end()) {
return 0;
}
return it->second.size();
}

View File

@ -0,0 +1,67 @@
#pragma once
#include <solanaceae/object_store/object_store.hpp>
#include <entt/container/dense_map.hpp>
#include "./util.hpp"
#include <cstdint>
#include <variant>
#include <vector>
struct SendingTransfers {
struct Entry {
struct Info {
// copy of info data
// too large?
std::vector<uint8_t> info_data;
};
struct Chunk {
ObjectHandle content;
size_t chunk_index; // <.< remove offset_into_file
//uint64_t offset_into_file;
// or data?
// if memmapped, this would be just a pointer
};
std::variant<Info, Chunk> v;
float time_since_activity {0.f};
bool isInfo(void) const { return std::holds_alternative<Info>(v); }
bool isChunk(void) const { return std::holds_alternative<Chunk>(v); }
Info& getInfo(void) { return std::get<Info>(v); }
const Info& getInfo(void) const { return std::get<Info>(v); }
Chunk& getChunk(void) { return std::get<Chunk>(v); }
const Chunk& getChunk(void) const { return std::get<Chunk>(v); }
};
// key is groupid + peerid
// TODO: replace with contact
entt::dense_map<uint64_t, entt::dense_map<uint8_t, Entry>> _data;
void tick(float delta);
Entry& emplaceInfo(uint32_t group_number, uint32_t peer_number, uint8_t transfer_id, const Entry::Info& info);
Entry& emplaceChunk(uint32_t group_number, uint32_t peer_number, uint8_t transfer_id, const Entry::Chunk& chunk);
bool containsPeer(uint32_t group_number, uint32_t peer_number) const { return _data.count(combine_ids(group_number, peer_number)); }
bool containsPeerTransfer(uint32_t group_number, uint32_t peer_number, uint8_t transfer_id) const;
// less reliable, since we dont keep the list of chunk indices
bool containsChunk(ObjectHandle o, size_t chunk_idx) const;
bool containsPeerChunk(uint32_t group_number, uint32_t peer_number, ObjectHandle o, size_t chunk_idx) const;
auto& getPeer(uint32_t group_number, uint32_t peer_number) { return _data.at(combine_ids(group_number, peer_number)); }
auto& getTransfer(uint32_t group_number, uint32_t peer_number, uint8_t transfer_id) { return getPeer(group_number, peer_number).at(transfer_id); }
void removePeer(uint32_t group_number, uint32_t peer_number);
void removePeerTransfer(uint32_t group_number, uint32_t peer_number, uint8_t transfer_id);
size_t size(void) const;
size_t sizePeer(uint32_t group_number, uint32_t peer_number) const;
};

File diff suppressed because it is too large

View File

@ -2,6 +2,7 @@
// solanaceae port of sha1 fts for NGCFT1
#include <solanaceae/object_store/object_store.hpp>
#include <solanaceae/contact/contact_model3.hpp>
#include <solanaceae/message3/registry_message_model.hpp>
#include <solanaceae/tox_contacts/tox_contact_model2.hpp>
@ -9,129 +10,105 @@
#include <solanaceae/ngc_ft1/ngcft1.hpp>
#include "./ft1_sha1_info.hpp"
#include "./sending_transfers.hpp"
#include "./receiving_transfers.hpp"
#include "./backends/sha1_mapped_filesystem.hpp"
#include <entt/entity/registry.hpp>
#include <entt/entity/handle.hpp>
#include <entt/container/dense_map.hpp>
#include <variant>
#include <random>
#include <atomic>
#include <mutex>
#include <list>
#include <chrono>
enum class Content : uint32_t {};
using ContentRegistry = entt::basic_registry<Content>;
using ContentHandle = entt::basic_handle<ContentRegistry>;
class SHA1_NGCFT1 : public RegistryMessageModelEventI, public NGCFT1EventI {
class SHA1_NGCFT1 : public ToxEventI, public RegistryMessageModelEventI, public ObjectStoreEventI, public NGCFT1EventI, public NGCEXTEventI {
ObjectStore2& _os;
ObjectStore2::SubscriptionReference _os_sr;
// TODO: backend abstraction
Contact3Registry& _cr;
RegistryMessageModel& _rmm;
RegistryMessageModelI& _rmm;
RegistryMessageModelI::SubscriptionReference _rmm_sr;
NGCFT1& _nft;
NGCFT1::SubscriptionReference _nft_sr;
ToxContactModel2& _tcm;
ToxEventProviderI& _tep;
ToxEventProviderI::SubscriptionReference _tep_sr;
NGCEXTEventProvider& _neep;
NGCEXTEventProvider::SubscriptionReference _neep_sr;
Backends::SHA1MappedFilesystem _mfb;
std::minstd_rand _rng {1337*11};
// registry per group?
ContentRegistry _contentr;
using clock = std::chrono::steady_clock;
clock::time_point _time_start_offset {clock::now()};
float getTimeNow(void) const {
return std::chrono::duration<float>{clock::now() - _time_start_offset}.count();
}
// limit this to each group?
entt::dense_map<SHA1Digest, ContentHandle> _info_to_content;
entt::dense_map<SHA1Digest, ObjectHandle> _info_to_content;
// sha1 chunk index
// TODO: optimize lookup
// TODO: multiple contents. hashes might be unique, but data is not
entt::dense_map<SHA1Digest, ContentHandle> _chunks;
entt::dense_map<SHA1Digest, ObjectHandle> _chunks;
// group_number, peer_number, content, chunk_hash, timer
std::deque<std::tuple<uint32_t, uint32_t, ContentHandle, SHA1Digest, float>> _queue_requested_chunk;
std::deque<std::tuple<uint32_t, uint32_t, ObjectHandle, SHA1Digest, float>> _queue_requested_chunk;
//void queueUpRequestInfo(uint32_t group_number, uint32_t peer_number, const SHA1Digest& hash);
void queueUpRequestChunk(uint32_t group_number, uint32_t peer_number, ContentHandle content, const SHA1Digest& hash);
void queueUpRequestChunk(uint32_t group_number, uint32_t peer_number, ObjectHandle content, const SHA1Digest& hash);
struct SendingTransfer {
struct Info {
// copy of info data
// too large?
std::vector<uint8_t> info_data;
};
struct Chunk {
ContentHandle content;
size_t chunk_index; // <.< remove offset_into_file
//uint64_t offset_into_file;
// or data?
// if memmapped, this would be just a pointer
};
std::variant<Info, Chunk> v;
float time_since_activity {0.f};
};
// key is groupid + peerid
entt::dense_map<uint64_t, entt::dense_map<uint8_t, SendingTransfer>> _sending_transfers;
struct ReceivingTransfer {
struct Info {
ContentHandle content;
// copy of info data
// too large?
std::vector<uint8_t> info_data;
};
struct Chunk {
ContentHandle content;
std::vector<size_t> chunk_indices;
// or data?
// if memmapped, this would be just a pointer
};
std::variant<Info, Chunk> v;
float time_since_activity {0.f};
};
// key is groupid + peerid
entt::dense_map<uint64_t, entt::dense_map<uint8_t, ReceivingTransfer>> _receiving_transfers;
SendingTransfers _sending_transfers;
ReceivingTransfers _receiving_transfers;
// makes request rotate around open content
std::deque<ContentHandle> _queue_content_want_info;
std::deque<ContentHandle> _queue_content_want_chunk;
std::deque<ObjectHandle> _queue_content_want_info;
std::atomic_bool _info_builder_dirty {false};
std::mutex _info_builder_queue_mutex;
//struct InfoBuilderEntry {
//// called on completion on the iterate thread
//// (owning)
//std::function<void(void)> fn;
//};
using InfoBuilderEntry = std::function<void(void)>;
std::list<InfoBuilderEntry> _info_builder_queue;
struct QBitsetEntry {
Contact3Handle c;
ObjectHandle o;
};
std::deque<QBitsetEntry> _queue_send_bitset;
static uint64_t combineIds(const uint32_t group_number, const uint32_t peer_number);
// FIXME: workaround missing contact events
// only used to remove participation on peer exit
entt::dense_map<uint64_t, Contact3Handle> _tox_peer_to_contact;
void updateMessages(ContentHandle ce);
void updateMessages(ObjectHandle ce);
std::optional<std::pair<uint32_t, uint32_t>> selectPeerForRequest(ContentHandle ce);
std::optional<std::pair<uint32_t, uint32_t>> selectPeerForRequest(ObjectHandle ce);
void queueBitsetSendFull(Contact3Handle c, ObjectHandle o);
File2I* objGetFile2Write(ObjectHandle o);
File2I* objGetFile2Read(ObjectHandle o);
public: // TODO: config
bool _udp_only {false};
size_t _max_concurrent_in {4};
size_t _max_concurrent_out {6};
// TODO: probably also includes running transfers rn (meh)
size_t _max_pending_requests {32}; // per content
size_t _max_concurrent_in {4}; // info only
size_t _max_concurrent_out {4*10}; // HACK: allow "ideal" number for 10 peers
public:
SHA1_NGCFT1(
ObjectStore2& os,
Contact3Registry& cr,
RegistryMessageModel& rmm,
RegistryMessageModelI& rmm,
NGCFT1& nft,
ToxContactModel2& tcm
ToxContactModel2& tcm,
ToxEventProviderI& tep,
NGCEXTEventProvider& neep
);
void iterate(float delta);
float iterate(float delta);
void onSendFileHashFinished(ObjectHandle o, Message3Registry* reg_ptr, Contact3 c, uint64_t ts);
protected: // rmm events (actions)
bool onEvent(const Message::Events::MessageUpdated&) override;
bool sendFilePath(const Contact3 c, std::string_view file_name, std::string_view file_path) override;
protected: // os events (actions)
bool onEvent(const ObjectStore::Events::ObjectUpdate&) override;
protected: // events
bool onEvent(const Events::NGCFT1_recv_request&) override;
@ -142,6 +119,13 @@ class SHA1_NGCFT1 : public RegistryMessageModelEventI, public NGCFT1EventI {
bool onEvent(const Events::NGCFT1_send_done&) override;
bool onEvent(const Events::NGCFT1_recv_message&) override;
bool sendFilePath(const Contact3 c, std::string_view file_name, std::string_view file_path) override;
bool onToxEvent(const Tox_Event_Group_Peer_Join* e) override;
bool onToxEvent(const Tox_Event_Group_Peer_Exit* e) override;
bool onEvent(const Events::NGCEXT_ft1_have&) override;
bool onEvent(const Events::NGCEXT_ft1_bitset&) override;
bool onEvent(const Events::NGCEXT_ft1_have_all&) override;
bool onEvent(const Events::NGCEXT_pc1_announce&) override;
};

View File

@ -0,0 +1,118 @@
#include "./transfer_stats_systems.hpp"
#include "./components.hpp"
#include <solanaceae/object_store/meta_components_file.hpp>
#include <iostream>
namespace Systems {
void transfer_tally_update(ObjectRegistry& os_reg, const float time_now) {
std::vector<Object> tally_to_remove;
// for each tally -> stats separated
os_reg.view<Components::TransferStatsTally>().each([&os_reg, time_now, &tally_to_remove](const auto ov, Components::TransferStatsTally& tally_comp) {
// for each peer
std::vector<Contact3> to_remove;
for (auto&& [peer_c, peer] : tally_comp.tally) {
auto& tss = os_reg.get_or_emplace<Components::TransferStatsSeparated>(ov).stats;
// special logic
// if newest older than 2sec
// discard
if (!peer.recently_sent.empty()) {
if (time_now - peer.recently_sent.back().time_point >= 2.f) {
// clean up stale
auto peer_in_stats_it = tss.find(peer_c);
if (peer_in_stats_it != tss.end()) {
peer_in_stats_it->second.rate_up = 0.f;
}
peer.recently_sent.clear();
if (peer.recently_received.empty()) {
to_remove.push_back(peer_c);
}
} else {
// else trim too old front
peer.trimSent(time_now);
size_t tally_bytes {0u};
for (auto& [time, bytes, accounted] : peer.recently_sent) {
if (!accounted) {
tss[peer_c].total_up += bytes;
accounted = true;
}
tally_bytes += bytes;
}
tss[peer_c].rate_up = tally_bytes / (time_now - peer.recently_sent.front().time_point + 0.00001f);
}
}
if (!peer.recently_received.empty()) {
if (time_now - peer.recently_received.back().time_point >= 2.f) {
// clean up stale
auto peer_in_stats_it = tss.find(peer_c);
if (peer_in_stats_it != tss.end()) {
peer_in_stats_it->second.rate_down = 0.f;
}
peer.recently_received.clear();
if (peer.recently_sent.empty()) {
to_remove.push_back(peer_c);
}
} else {
// else trim too old front
peer.trimReceived(time_now);
size_t tally_bytes {0u};
for (auto& [time, bytes, accounted] : peer.recently_received) {
if (!accounted) {
tss[peer_c].total_down += bytes;
accounted = true;
}
tally_bytes += bytes;
}
tss[peer_c].rate_down = tally_bytes / (time_now - peer.recently_received.front().time_point + 0.00001f);
}
}
}
for (const auto c : to_remove) {
tally_comp.tally.erase(c);
}
if (tally_comp.tally.empty()) {
tally_to_remove.push_back(ov);
}
});
// for each stats separated -> stats (total)
os_reg.view<Components::TransferStatsSeparated, Components::TransferStatsTally>().each([&os_reg](const auto ov, Components::TransferStatsSeparated& tss_comp, const auto&) {
auto& stats = os_reg.get_or_emplace<ObjComp::Ephemeral::File::TransferStats>(ov);
stats = {}; // reset
for (const auto& [_, peer_stats] : tss_comp.stats) {
stats.rate_up += peer_stats.rate_up;
stats.rate_down += peer_stats.rate_down;
stats.total_up += peer_stats.total_up;
stats.total_down += peer_stats.total_down;
}
#if 0
std::cout << "updated stats:\n"
<< " rate u:" << stats.rate_up/1024 << "KiB/s d:" << stats.rate_down/1024 << "KiB/s\n"
<< " total u:" << stats.total_up/1024 << "KiB d:" << stats.total_down/1024 << "KiB\n"
;
#endif
});
for (const auto ov : tally_to_remove) {
os_reg.remove<Components::TransferStatsTally>(ov);
}
}
} // Systems

View File

@ -0,0 +1,11 @@
#pragma once
#include <solanaceae/object_store/object_store.hpp>
namespace Systems {
// time only needs to be relative
void transfer_tally_update(ObjectRegistry& os_reg, const float time_now);
} // Systems

View File

@ -0,0 +1,13 @@
#pragma once
#include <cstdint>
inline static uint64_t combine_ids(const uint32_t group_number, const uint32_t peer_number) {
return (uint64_t(group_number) << 32) | peer_number;
}
inline static void decompose_ids(const uint64_t combined_id, uint32_t& group_number, uint32_t& peer_number) {
group_number = combined_id >> 32;
peer_number = combined_id & 0xffffffff;
}
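// usage sketch (illustrative):
//   const uint64_t key = combine_ids(5u, 7u); // 0x0000000500000007
//   uint32_t g {0}; uint32_t p {0};
//   decompose_ids(key, g, p); // g == 5, p == 7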

View File

@ -0,0 +1,113 @@
#include "./ngc_hs2_recv.hpp"
#include <solanaceae/tox_contacts/tox_contact_model2.hpp>
NGCHS2Recv::NGCHS2Recv(
Contact3Registry& cr,
RegistryMessageModelI& rmm,
ToxContactModel2& tcm,
ToxEventProviderI& tep,
NGCFT1& nft
) :
_cr(cr),
_rmm(rmm),
_rmm_sr(_rmm.newSubRef(this)),
_tcm(tcm),
_tep_sr(tep.newSubRef(this)),
_nft(nft),
_nftep_sr(_nft.newSubRef(this))
{
_rmm_sr
.subscribe(RegistryMessageModel_Event::message_construct)
.subscribe(RegistryMessageModel_Event::message_updated)
.subscribe(RegistryMessageModel_Event::message_destroy)
;
_tep_sr
.subscribe(TOX_EVENT_GROUP_PEER_JOIN)
.subscribe(TOX_EVENT_GROUP_PEER_EXIT)
;
_nftep_sr
.subscribe(NGCFT1_Event::recv_request)
.subscribe(NGCFT1_Event::recv_init)
.subscribe(NGCFT1_Event::recv_data)
.subscribe(NGCFT1_Event::send_data)
.subscribe(NGCFT1_Event::recv_done)
.subscribe(NGCFT1_Event::send_done)
;
}
NGCHS2Recv::~NGCHS2Recv(void) {
}
float NGCHS2Recv::iterate(float delta) {
return 1000.f;
}
bool NGCHS2Recv::onEvent(const Message::Events::MessageConstruct&) {
return false;
}
bool NGCHS2Recv::onEvent(const Message::Events::MessageUpdated&) {
return false;
}
bool NGCHS2Recv::onEvent(const Message::Events::MessageDestory&) {
return false;
}
bool NGCHS2Recv::onEvent(const Events::NGCFT1_recv_request& e) {
if (
e.file_kind != NGCFT1_file_kind::HS2_INFO_RANGE_TIME &&
e.file_kind != NGCFT1_file_kind::HS2_SINGLE_MESSAGE
) {
return false; // not for us
}
return false;
}
bool NGCHS2Recv::onEvent(const Events::NGCFT1_recv_init& e) {
if (
e.file_kind != NGCFT1_file_kind::HS2_INFO_RANGE_TIME &&
e.file_kind != NGCFT1_file_kind::HS2_SINGLE_MESSAGE
) {
return false; // not for us
}
return false;
}
bool NGCHS2Recv::onEvent(const Events::NGCFT1_recv_data&) {
return false;
}
bool NGCHS2Recv::onEvent(const Events::NGCFT1_send_data&) {
return false;
}
bool NGCHS2Recv::onEvent(const Events::NGCFT1_recv_done&) {
return false;
}
bool NGCHS2Recv::onEvent(const Events::NGCFT1_send_done&) {
return false;
}
bool NGCHS2Recv::onToxEvent(const Tox_Event_Group_Peer_Join* e) {
const auto group_number = tox_event_group_peer_join_get_group_number(e);
const auto peer_number = tox_event_group_peer_join_get_peer_id(e);
const auto c = _tcm.getContactGroupPeer(group_number, peer_number);
assert(c);
// add to check list with initial cooldown
return false;
}
bool NGCHS2Recv::onToxEvent(const Tox_Event_Group_Peer_Exit* e) {
return false;
}

View File

@ -0,0 +1,83 @@
#pragma once
#include <solanaceae/toxcore/tox_event_interface.hpp>
#include <solanaceae/contact/contact_model3.hpp>
#include <solanaceae/message3/registry_message_model.hpp>
#include <solanaceae/ngc_ft1/ngcft1.hpp>
#include <entt/container/dense_map.hpp>
// fwd
class ToxContactModel2;
// time ranges
// should we just do last x minutes like zngchs?
// properly done, we need to use:
// - Message::Components::ViewCurserBegin
// - Message::Components::ViewCurserEnd
//
// on startup, manually check all registries for ranges (meh) (do later)
// listen on message events, check if range, see if range satisfied recently
// deal with a queue, and delay (at least 1sec, 3-10sec after a peer con change)
// or we always overrequest (eg 48h), and only fetch messages in, or close to, the range
class NGCHS2Recv : public RegistryMessageModelEventI, public ToxEventI, public NGCFT1EventI {
Contact3Registry& _cr;
RegistryMessageModelI& _rmm;
RegistryMessageModelI::SubscriptionReference _rmm_sr;
ToxContactModel2& _tcm;
ToxEventProviderI::SubscriptionReference _tep_sr;
NGCFT1& _nft;
NGCFT1EventProviderI::SubscriptionReference _nftep_sr;
// describes our knowledge of a remote peer
struct RemoteInfo {
// list of all ppk+mid+ts they sent us (filtered by reqs, like range, ppk...)
// with when it last sent a range? hmm
};
entt::dense_map<Contact3, RemoteInfo> _remote_info;
// open/running info requests (by c)
// open/running info responses (by c)
static const bool _only_send_self_observed {true};
static const int64_t _max_time_into_past_default {60}; // s
public:
NGCHS2Recv(
Contact3Registry& cr,
RegistryMessageModelI& rmm,
ToxContactModel2& tcm,
ToxEventProviderI& tep,
NGCFT1& nf
);
~NGCHS2Recv(void);
float iterate(float delta);
// add to queue with timer
// checks and updates all existing cursors for the given reg in the queue
void enqueueWantCurser(Message3Handle m);
protected:
bool onEvent(const Message::Events::MessageConstruct&) override;
bool onEvent(const Message::Events::MessageUpdated&) override;
bool onEvent(const Message::Events::MessageDestory&) override;
protected:
bool onEvent(const Events::NGCFT1_recv_request&) override;
bool onEvent(const Events::NGCFT1_recv_init&) override;
bool onEvent(const Events::NGCFT1_recv_data&) override;
bool onEvent(const Events::NGCFT1_send_data&) override;
bool onEvent(const Events::NGCFT1_recv_done&) override;
bool onEvent(const Events::NGCFT1_send_done&) override;
protected:
bool onToxEvent(const Tox_Event_Group_Peer_Join* e) override;
bool onToxEvent(const Tox_Event_Group_Peer_Exit* e) override;
};

View File

@ -0,0 +1,286 @@
#include "./ngc_hs2_send.hpp"
#include <solanaceae/util/span.hpp>
#include <solanaceae/tox_contacts/tox_contact_model2.hpp>
#include <solanaceae/contact/components.hpp>
#include <iostream>
// https://www.youtube.com/watch?v=AdAqsgga3qo
namespace Components {
void IncommingInfoRequestQueue::queueRequest(const InfoRequest& new_request) {
// TODO: do more than exact dedupe
for (const auto& [ts_start, ts_end] : _queue) {
if (ts_start == new_request.ts_start && ts_end == new_request.ts_end) {
return; // already enqueued
}
}
_queue.push_back(new_request);
}
void IncommingMsgRequestQueue::queueRequest(const SingleMessageRequest& new_request) {
for (const auto& [ppk, mid, ts] : _queue) {
if (mid == new_request.mid && ts == new_request.ts && ppk == new_request.ppk) {
return; // already enqueued
}
}
_queue.push_back(new_request);
}
} // Components
NGCHS2Send::NGCHS2Send(
Contact3Registry& cr,
RegistryMessageModelI& rmm,
ToxContactModel2& tcm,
NGCFT1& nft
) :
_cr(cr),
_rmm(rmm),
_tcm(tcm),
_nft(nft),
_nftep_sr(_nft.newSubRef(this))
{
_nftep_sr
.subscribe(NGCFT1_Event::recv_request)
//.subscribe(NGCFT1_Event::recv_init) // we only send init
//.subscribe(NGCFT1_Event::recv_data) // we only send data
.subscribe(NGCFT1_Event::send_data)
//.subscribe(NGCFT1_Event::recv_done)
.subscribe(NGCFT1_Event::send_done)
;
}
NGCHS2Send::~NGCHS2Send(void) {
}
float NGCHS2Send::iterate(float delta) {
// limit how often we update here (new fts usually)
if (_iterate_heat > 0.f) {
_iterate_heat -= delta;
return 1000.f;
} else {
_iterate_heat = _iterate_cooldown;
}
// work request queue
// check if already running, discard
auto fn_iirq = [this](auto&& view) {
for (auto&& [cv, iirq] : view.each()) {
Contact3Handle c{_cr, cv};
auto& iirr = c.get_or_emplace<Components::IncommingInfoRequestRunning>();
// dedup queued from running
if (iirr._list.size() >= _max_parallel_per_peer) {
continue;
}
// new ft here?
}
};
auto fn_imrq = [this](auto&& view) {
for (auto&& [cv, imrq] : view.each()) {
Contact3Handle c{_cr, cv};
auto& imrr = c.get_or_emplace<Components::IncommingMsgRequestRunning>();
// dedup queued from running
if (imrr._list.size() >= _max_parallel_per_peer) {
continue;
}
// new ft here?
}
};
// first handle range requests on weak self
//for (auto&& [cv, iirq] : _cr.view<Contact::Components::TagSelfWeak, Components::IncommingInfoRequestQueue>().each()) {
fn_iirq(_cr.view<Contact::Components::TagSelfWeak, Components::IncommingInfoRequestQueue>());
// then handle messages on weak self
//for (auto&& [cv, imrq] : _cr.view<Contact::Components::TagSelfWeak, Components::IncommingMsgRequestQueue>().each()) {
fn_imrq(_cr.view<Contact::Components::TagSelfWeak, Components::IncommingMsgRequestQueue>());
// we could stop here, if too much is already running
// then range on others
//for (auto&& [cv, iirq] : _cr.view<Components::IncommingInfoRequestQueue>(entt::exclude_t<Contact::Components::TagSelfWeak>{}).each()) {
fn_iirq(_cr.view<Components::IncommingInfoRequestQueue>(entt::exclude_t<Contact::Components::TagSelfWeak>{}));
// then messages on others
//for (auto&& [cv, imrq] : _cr.view<Components::IncommingMsgRequestQueue>(entt::exclude_t<Contact::Components::TagSelfWeak>{}).each()) {
fn_imrq(_cr.view<Components::IncommingMsgRequestQueue>(entt::exclude_t<Contact::Components::TagSelfWeak>{}));
return 1000.f;
}
template<typename Type>
static uint64_t deserlSimpleType(ByteSpan bytes) {
if (bytes.size < sizeof(Type)) {
throw int(1);
}
Type value {0}; // zero-init; the loop below only ORs bytes in
for (size_t i = 0; i < sizeof(Type); i++) {
value |= Type(bytes[i]) << (i*8);
}
return value;
}
static uint32_t deserlMID(ByteSpan mid_bytes) {
return deserlSimpleType<uint32_t>(mid_bytes);
}
static uint64_t deserlTS(ByteSpan ts_bytes) {
return deserlSimpleType<uint64_t>(ts_bytes);
}
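// e.g. the bytes {0x2a, 0x00, 0x00, 0x00} parsed as a mid yield 42; values are
// read little-endian, and a span shorter than the target type throws
// (caught by the parse blocks in the handlers below)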
void NGCHS2Send::handleRange(Contact3Handle c, const Events::NGCFT1_recv_request& e) {
ByteSpan fid{e.file_id, e.file_id_size};
// TODO: better size check
if (fid.size != sizeof(uint64_t)+sizeof(uint64_t)) {
std::cerr << "NGCHS2S error: range not lange enough\n";
return;
}
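// expected fid layout (16 bytes): [ts_start: 8 bytes LE][ts_end: 8 bytes LE]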
// seconds
uint64_t ts_start{0};
uint64_t ts_end{0};
// parse
try {
ByteSpan ts_start_bytes{fid.ptr, sizeof(uint64_t)};
ts_start = deserlTS(ts_start_bytes);
ByteSpan ts_end_bytes{ts_start_bytes.ptr+ts_start_bytes.size, sizeof(uint64_t)};
ts_end = deserlTS(ts_end_bytes);
} catch (...) {
std::cerr << "NGCHS2S error: failed to parse range\n";
return;
}
// dedupe insert into queue
// how much overlap do we allow?
c.get_or_emplace<Components::IncommingInfoRequestQueue>().queueRequest({
ts_start,
ts_end,
});
}
void NGCHS2Send::handleSingleMessage(Contact3Handle c, const Events::NGCFT1_recv_request& e) {
ByteSpan fid{e.file_id, e.file_id_size};
// TODO: better size check
if (fid.size != 32+sizeof(uint32_t)+sizeof(uint64_t)) {
std::cerr << "NGCHS2S error: singlemessage not lange enough\n";
return;
}
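// expected fid layout (44 bytes): [ppk: 32 bytes][mid: 4 bytes LE][ts: 8 bytes LE]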
ByteSpan ppk;
uint32_t mid {0};
uint64_t ts {0}; // deciseconds
// parse
try {
// - ppk
// TOX_GROUP_PEER_PUBLIC_KEY_SIZE (32)
ppk = {fid.ptr, 32};
// - mid
ByteSpan mid_bytes{fid.ptr+ppk.size, sizeof(uint32_t)};
mid = deserlMID(mid_bytes);
// - ts
ByteSpan ts_bytes{mid_bytes.ptr+mid_bytes.size, sizeof(uint64_t)};
ts = deserlTS(ts_bytes);
} catch (...) {
std::cerr << "NGCHS2S error: failed to parse singlemessage\n";
return;
}
// file content
// - message type (text/textaction/file(ft1sha1))
// - if text/textaction
// - text (string)
// - else if file
// - file type
// - file id
// for queue, we need group, peer, msg_ppk, msg_mid, msg_ts
// dedupe insert into queue
c.get_or_emplace<Components::IncommingMsgRequestQueue>().queueRequest({
ppk,
mid,
ts,
});
}
bool NGCHS2Send::onEvent(const Message::Events::MessageConstruct&) {
return false;
}
bool NGCHS2Send::onEvent(const Message::Events::MessageUpdated&) {
return false;
}
bool NGCHS2Send::onEvent(const Message::Events::MessageDestory&) {
return false;
}
bool NGCHS2Send::onEvent(const Events::NGCFT1_recv_request& e) {
if (
e.file_kind != NGCFT1_file_kind::HS2_INFO_RANGE_TIME &&
e.file_kind != NGCFT1_file_kind::HS2_SINGLE_MESSAGE
) {
return false; // not for us
}
// TODO: when is it done from queue?
auto c = _tcm.getContactGroupPeer(e.group_number, e.peer_number);
if (!c) {
return false; // how
}
// is other peer allowed to make requests
//bool quick_allow {false};
bool quick_allow {true}; // HACK: disable all restrictions for this early test
// TODO: quick deny?
{
// - tagged as weakself
if (!quick_allow && c.all_of<Contact::Components::TagSelfWeak>()) {
quick_allow = true;
}
// - sub perm level??
// - out of max time range (ft specific, not a quick_allow)
}
if (e.file_kind == NGCFT1_file_kind::HS2_INFO_RANGE_TIME) {
handleRange(c, e);
} else if (e.file_kind == NGCFT1_file_kind::HS2_SINGLE_MESSAGE) {
handleSingleMessage(c, e);
}
return true;
}
bool NGCHS2Send::onEvent(const Events::NGCFT1_send_data&) {
return false;
}
bool NGCHS2Send::onEvent(const Events::NGCFT1_send_done&) {
return false;
}


@ -0,0 +1,116 @@
#pragma once
#include <solanaceae/toxcore/tox_event_interface.hpp>
#include <solanaceae/contact/contact_model3.hpp>
#include <solanaceae/message3/registry_message_model.hpp>
#include <solanaceae/ngc_ft1/ngcft1.hpp>
#include <entt/container/dense_map.hpp>
#include <solanaceae/util/span.hpp>
#include <vector>
// fwd
class ToxContactModel2;
struct InfoRequest {
uint64_t ts_start{0};
uint64_t ts_end{0};
};
struct SingleMessageRequest {
ByteSpan ppk;
uint32_t mid {0};
uint64_t ts {0}; // deciseconds
};
// TODO: move to own file
namespace Components {
struct IncommingInfoRequestQueue {
std::vector<InfoRequest> _queue;
// we should remove/not add queued requests
// that are subsets of the same or larger ranges
void queueRequest(const InfoRequest& new_request);
};
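// a minimal sketch of the subset dedupe queueRequest() is meant to perform
// (assumptions: the real definition lives in the .cpp and may differ, and
// ranges are ordered ts_start <= ts_end):
//
// void IncommingInfoRequestQueue::queueRequest(const InfoRequest& nr) {
//     for (const auto& ir : _queue) {
//         if (ir.ts_start <= nr.ts_start && nr.ts_end <= ir.ts_end) {
//             return; // subset of an already queued range, skip
//         }
//     }
//     _queue.push_back(nr);
// }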
struct IncommingInfoRequestRunning {
struct Entry {
InfoRequest ir;
std::vector<uint8_t> data; // transfer data in memory
};
entt::dense_map<uint8_t, Entry> _list;
};
struct IncommingMsgRequestQueue {
// optimize dup lookups (this list could be large)
std::vector<SingleMessageRequest> _queue;
// removes dups
void queueRequest(const SingleMessageRequest& new_request);
};
struct IncommingMsgRequestRunning {
struct Entry {
SingleMessageRequest smr;
std::vector<uint8_t> data; // transfer data in memory
};
// make more efficient? this list is very short
entt::dense_map<uint8_t, Entry> _list;
};
} // Components
class NGCHS2Send : public RegistryMessageModelEventI, public NGCFT1EventI {
Contact3Registry& _cr;
RegistryMessageModelI& _rmm;
ToxContactModel2& _tcm;
NGCFT1& _nft;
NGCFT1EventProviderI::SubscriptionReference _nftep_sr;
float _iterate_heat {0.f};
constexpr static float _iterate_cooldown {1.22f}; // sec
// open/running info requests (by c)
// comp on peer c
// open/running info responses (by c)
// comp on peer c
// limit to 2 uploads per peer simultaneously
// TODO: increase for prod (4?)
// currently per type
constexpr static size_t _max_parallel_per_peer {2};
constexpr static bool _only_send_self_observed {true};
constexpr static int64_t _max_time_into_past_default {60*15}; // s
public:
NGCHS2Send(
Contact3Registry& cr,
RegistryMessageModelI& rmm,
ToxContactModel2& tcm,
NGCFT1& nf
);
~NGCHS2Send(void);
float iterate(float delta);
void handleRange(Contact3Handle c, const Events::NGCFT1_recv_request&);
void handleSingleMessage(Contact3Handle c, const Events::NGCFT1_recv_request&);
protected:
bool onEvent(const Message::Events::MessageConstruct&) override;
bool onEvent(const Message::Events::MessageUpdated&) override;
bool onEvent(const Message::Events::MessageDestory&) override;
protected:
bool onEvent(const Events::NGCFT1_recv_request&) override;
bool onEvent(const Events::NGCFT1_send_data&) override;
bool onEvent(const Events::NGCFT1_send_done&) override;
};


@ -0,0 +1,77 @@
# [NGC] Group-History-Sync (v2) [PoC] [Draft]
Simple group history sync that uses `peer public key` + `message_id` + `timestamp` (`ppk+mid+ts`) to identify messages (mostly) uniquely and to deliver them.
## Requirements
TODO
### File transfers
For sending packs of messages. A single message can be larger than a single custom packet, so this is a must-have.
## Procedure
Peer A can request the `ppk+mid+ts` list for a given time range from peer B.
Peer B then sends a filetransfer (with a special file type) containing that list of `ppk+mid+ts` entries.
Optionally compressed (delta-coding / zstd).
Peer A keeps doing that until the desired time span is covered.
After that, or simultaneously, Peer A requests messages from peer B, either individually or packed in ranges (TBD).
Optionally compressed.
During all that, peer B usually does the same thing to peer A.
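
To make the range request concrete, below is a minimal sketch of building the request `file_id`, matching the little-endian parsing in `handleRange` above. The send call itself is elided, since the exact NGCFT1 request API is out of scope here, and `buildRangeRequestFileID` is a name chosen purely for illustration.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// file_id for a time range (0x00000f00) request:
// ts start and ts end, each 8 bytes little-endian
std::vector<uint8_t> buildRangeRequestFileID(uint64_t ts_start, uint64_t ts_end) {
	std::vector<uint8_t> fid;
	auto push_u64 = [&fid](uint64_t v) {
		for (size_t i = 0; i < sizeof(v); i++) {
			fid.push_back(uint8_t((v >> (i*8)) & 0xff));
		}
	};
	push_u64(ts_start);
	push_u64(ts_end);
	return fid;
}
```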
## Traffic savings
It is recommended to remember whether a range has been requested and answered by a given peer, to reduce traffic.
While compression is optional, it is recommended.
## Message uniqueness
This protocol relies on the randomness of `message_id` and on clocks being more or less synchronized.
However, `message_id` can be manipulated freely by any peer, which can make messages appear as duplicates.
This can also be used deliberately, if you don't want your messages to be synchronized (to an extent).
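
Put differently, message identity under this scheme is just the triple. A minimal sketch of the comparison (the `MsgIdentity` type is hypothetical, not part of the protocol):

```cpp
#include <array>
#include <cstdint>

// hypothetical dedupe key: two messages with an equal triple are treated as
// the same message during sync, which is why manipulating message_id can
// force apparent duplicates and thereby suppress syncing
struct MsgIdentity {
	std::array<uint8_t, 32> ppk{}; // peer public key of the author
	uint32_t mid{0};               // message_id, freely chosen by the sender
	uint64_t ts{0};                // timestamp, relies on synchronized clocks

	bool operator==(const MsgIdentity& other) const {
		return ppk == other.ppk && mid == other.mid && ts == other.ts;
	}
};
```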
## Security
Only sync publicly sent/received messages.
Only allow syncing, or extended time ranges, from peers you trust (enough).
The default shall be to not offer any messages.
Indirect messages shall carry low credibility, while messages synced directly from their author carry mid credibility.
Either only high or mid credibility shall be sent.
Manual exceptions to all of the above can be made at the user's discretion, e.g. for other self-owned devices.
## File transfer requests
TODO: is reusing the ft request api a good idea for this?
| fttype | name | content (ft id) |
|------------|------|---------------------|
| 0x00000f00 | time range | - ts start <br/> - ts end <br/> - supported compression? |
|            | TODO: id range based request? | |
| 0x00000f01 | single message | - ppk <br/> - mid <br/> - ts |
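
Analogous to the time range request, a minimal sketch of the 44-byte single message request `file_id`, matching the field order and little-endian parsing in `handleSingleMessage` (`buildSingleMessageRequestFileID` is again a name chosen for illustration):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// file_id for a single message (0x00000f01) request:
// ppk (32 bytes) + mid (4 bytes LE) + ts (8 bytes LE)
std::vector<uint8_t> buildSingleMessageRequestFileID(
	const std::array<uint8_t, 32>& ppk, uint32_t mid, uint64_t ts
) {
	std::vector<uint8_t> fid(ppk.cbegin(), ppk.cend());
	for (size_t i = 0; i < sizeof(mid); i++) {
		fid.push_back(uint8_t((mid >> (i*8)) & 0xff));
	}
	for (size_t i = 0; i < sizeof(ts); i++) {
		fid.push_back(uint8_t((ts >> (i*8)) & 0xff));
	}
	return fid;
}
```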
## File transfers
| fttype | name | content |
|------------|------|---------------------|
| 0x00000f00 | time range | - feature bitset (1 byte? different compressions?) <br/> - ts start <br/> - ts end <br/> - list size <br/> + entry `ppk` <br/> + entry `mid` <br/> + entry `ts` |
| 0x00000f01 | single message | - message type (text/textaction/file) <br/> - text if text or action, file type and file id if file |
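
For orientation, a sketch of walking an uncompressed time range response body. Field widths beyond what the table pins down are assumptions of this sketch: a 1-byte feature bitset, 8-byte little-endian timestamps, a 4-byte little-endian list size, and fixed 44-byte entries.

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

struct RangeResponseEntry {
	const uint8_t* ppk{nullptr}; // 32 bytes, points into the transfer buffer
	uint32_t mid{0};
	uint64_t ts{0};
};

// read a little-endian unsigned integer of n <= 8 bytes
static uint64_t readLE(const uint8_t* p, size_t n) {
	uint64_t v{0};
	for (size_t i = 0; i < n; i++) {
		v |= uint64_t(p[i]) << (i*8);
	}
	return v;
}

std::optional<std::vector<RangeResponseEntry>> parseRangeResponse(const std::vector<uint8_t>& data) {
	constexpr size_t header_size {1+8+8+4}; // feature bitset + ts start + ts end + list size
	constexpr size_t entry_size {32+4+8};   // ppk + mid + ts
	if (data.size() < header_size) {
		return std::nullopt;
	}
	const uint64_t list_size = readLE(data.data()+1+8+8, 4);
	if (data.size() < header_size + list_size*entry_size) {
		return std::nullopt;
	}
	std::vector<RangeResponseEntry> entries;
	for (uint64_t i = 0; i < list_size; i++) {
		const uint8_t* p = data.data() + header_size + i*entry_size;
		entries.push_back({p, uint32_t(readLE(p+32, 4)), readLE(p+32+4, 8)});
	}
	return entries;
}
```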
## TODO
- [ ] figure out a pro-active approach (instead of waiting for a range request)
- [ ] compression in the ft layer? (would make it reusable) hint/autodetect?