Compare commits
No commits in common. "525446cc4a31ca59a0b1524ca5b227be04174dcb" and "1e7a1cec2d247b1a12d02402ee0575dbc89eedca" have entirely different histories.
525446cc4a ... 1e7a1cec2d

README.md  63
@@ -3,7 +3,7 @@
 Read and manipulate tox profile files. It started as a simple script from
 <https://stackoverflow.com/questions/30901873/what-format-are-tox-files-stored-in>

-For the moment tox_savefile.py just reads a Tox profile and
+For the moment logging_tox_savefile.py just reads a Tox profile and
 prints to stdout various things that it finds. Then it writes what it
 found in YAML to stderr. Later it can be extended to print out JSON
 or YAML, and then extended to accept JSON or YAML to write a profile.
@@ -11,8 +11,8 @@ or YAML, and then extended to accept JSON or YAML to write a profile.
 ## Usage

 Reads a tox profile and prints out information on what's in there to stderr.
-Call it with one argument, the filename of the profile for the decrypt, edit
-or info commands, or the filename of the nodes file for the nodes command.
+Call it with one argument, the filename of the profile for the decrypt or info
+commands, or the filename of the nodes file for the nodes command.

 3 commands are supported:
 1. ```--command decrypt``` decrypts the profile and writes to the result
@@ -22,7 +22,7 @@ to stdout
 a profile

 ```
-usage: tox_savefile.py [-h] [--output OUTPUT]
+usage: logging_tox_savefile.py [-h] [--output OUTPUT]
 [--command {info,decrypt,nodes}]
 [--indent INDENT]
 [--info {info,repr,yaml,json,pprint,nmap_udp,nmap_tcp}]
@@ -52,60 +52,28 @@ Optional arguments:

 ```info``` will output the profile on stdout, or to a file with ```--output```

-Choose one of ```{info,repr,yaml,json,pprint,save}```
+Choose one of ```{info,repr,yaml,json,pprint}```
 for the format for info command.

 Choose one of ```{nmap_udp,nmap_tcp}```
 to run tests using ```nmap``` for the ```DHT``` and ```TCP_RELAY```
 sections of the profile. Reguires ```nmap``` and uses ```sudo```.

-#### Saving a copy
-
-The code now can generate a saved copy of the profile as it parses the profile.
-Use the command ```--command info --info save``` with ```--output```
-and a filename, to process the file with info to stderr, and it will
-save an copy of the file to the ```--output``` (unencrypted).
-
-It may be shorter than the original profile by up to 512 bytes, as the
-original toxic profile is padded at the end with nulls (or maybe in the
-decryption).
-
 ### --command nodes

 Takes a DHTnodes.json file as an argument.
 Choose one of ```{select_tcp,select_udp,select_version}```
-for ```--nodes``` to select TCP nodes, UDP nodes,
-or nodes with the latest version. Requires ```jq```.
+for ```--nodes``` to select TCP nodes, UDP nodes or nodes with the latest version.
+Requires ```jq```.

-Choose one of ```{nmap_tcp,nmap_udp}``` to run tests using ```nmap```
-for the ```status_tcp==True``` and ```status_udp==True``` nodes.
-Reguires ```nmap``` and uses ```sudo```.
+Choose one of ```{nmap_tcp,nmap_udp}```
+to run tests using ```nmap``` for the ```tcp``` and ```udp```
+nodes. Reguires ```nmap``` and uses ```sudo```.

 ### --command decrypt

 Decrypt a profile.

-### --command edit
-
-The code now can generate an edited copy of the profile.
-Use the command ```--command edit --edit section,key,val``` with
-```--output``` and a filename, to process the file with info to stderr,
-and it will save an copy of the edited file to the
-```--output``` file (unencrypted). There's not much editing yet; give
-```--command edit --edit help``` to get a list of what Available Sections,
-and Supported Quads (section,num,key,type) that can be edited.
-Currently it is:
-```
-NAME,.,Nick_name,str
-STATUSMESSAGE,.,Status_message,str
-STATUS,.,Online_status,int
-```
-The ```num``` field is to accomodate sections that have lists:
-* ```.``` is a placeholder for sections that don't have lists.
-* ```<int>``` is for the nth element of the list, zero-based.
-* ```*``` is for all elements of the list.
-
-
 ## Requirements

 If you want to read encrypted profiles, you need to download
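The removed ```--command edit``` text above describes edits as four-field quads (section, num, key, value), split the same way the old branch's code splits ```--edit``` with ```split(',', 3)```. As an illustration only, not code from either commit and with an invented helper name, such a quad could be parsed like this:

```python
# Illustrative sketch: split an --edit quad of the form SECTION,num,key,value.
# 'num' is '.' for scalar sections, a zero-based index, or '*' for every
# element of a list section, per the removed README text above.
def parse_edit_quad(spec):
    section, num, key, value = spec.split(',', 3)  # value may itself contain commas
    if num not in ('.', '*'):
        num = int(num)
    return section, num, key, value

print(parse_edit_quad('STATUSMESSAGE,.,Status_message,Toxxed on Toxic'))
# -> ('STATUSMESSAGE', '.', 'Status_message', 'Toxxed on Toxic')
```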
@@ -125,22 +93,11 @@ If you want to write in YAML, you need Python yaml:
 If you have coloredlogs installed it will make use of it:
 <https://pypi.org/project/coloredlogs/>

-For the ```select``` and ```nmap``` commands, the ```jq``` utility is
-required. It's available in most distros, or <https://stedolan.github.io/jq/>
-
-For the ```nmap``` commands, the ```nmap``` utility is
-required. It's available in most distros, or <https://nmap.org/>
-
 ## Future Directions

-This has not been tested on Windwoes, but is should be simple to fix.
-
 Because it's written in Python it is easy to extend to, for example,
 rekeying a profile when copying a profile to a new device:
 <https://git.plastiras.org/emdee/tox_profile/wiki/MultiDevice-Announcements-POC>

-
-## Specification
-
 There is a copy of the Tox [spec](https://toktok.ltd/spec.html)
 in the repo - it is missing any description of the groups section.
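The ```--command nodes``` selections above lean on ```jq```. A rough Python equivalent is sketched below as a hedged illustration, not the repository's pipeline; it assumes the downloaded https://nodes.tox.chat/json document has a top-level ```nodes``` list whose entries carry ```status_tcp```/```status_udp``` booleans (the keys the old test script greps for), which this diff does not spell out.

```python
# Hedged sketch: pick nodes out of a DHTnodes.json-style file in pure Python.
import json

def select_nodes(path, key="status_udp"):
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    return [node for node in data.get("nodes", []) if node.get(key)]

# Example: udp_nodes = select_nodes("DHTnodes.json", "status_udp")
```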

logging_tox_savefile.py
@@ -20,7 +20,7 @@ commands, or the filename of the nodes file for the nodes command.
 """
 --output Destination for info/decrypt - defaults to stdout
 --info default='info',
-choices=['info', 'save', 'repr', 'yaml','json', 'pprint']
+choices=['info', 'repr', 'yaml','json', 'pprint']
 with --info=info prints info about the profile to stderr
 nmap_udp - test DHT nodes with nmap
 nmap_tcp - test TCP_RELAY nodes with nmap
@@ -76,7 +76,6 @@ except ImportError as e:
 try:
 # https://git.plastiras.org/emdee/toxygen_wrapper
 from wrapper.toxencryptsave import ToxEncryptSave
-from wrapper_tests.support_http import download_url
 except ImportError as e:
 print(f"Import Error {e}")
 print("Download toxygen_wrapper to deal with encrypted tox files, from:")
@@ -86,31 +85,24 @@ except ImportError as e:
 print("and libtoxencryptsave.so into wrapper/../libs/")
 print("Link all 3 from libtoxcore.so if you have only libtoxcore.so")
 ToxEncryptSave = None
-download_url = None
+try:
+from wrapper_tests.support_http import download_url
+except:
+try:
+from support_http import download_url
+except ImportError as e:
+print(f"Import Error {e}")
+print("Download toxygen_wrapper to deal with encrypted tox files, from:")
+print("https://git.plastiras.org/emdee/toxygen_wrapper")
+download_url = None

 LOG = logging.getLogger('TSF')

-# Fix for Windows
-sDIR = os.environ.get('TMPDIR', '/tmp')
-sTOX_VERSION = "1000002018"
 bHAVE_NMAP = shutil.which('nmap')
+sDIR = os.environ.get('TMPDIR', '/tmp')
+# nodes
+sTOX_VERSION = "1000002018"
 bHAVE_JQ = shutil.which('jq')
-bMARK = b'\x00\x00\x00\x00\x1f\x1b\xed\x15'
-bDEBUG = 'DEBUG' in os.environ and os.environ['DEBUG'] != 0
-def trace(s): LOG.log(LOG.level, '+ ' +s)
-LOG.trace = trace
-
-global lOUT, bOUT, aOUT, sENC
-lOUT = []
-aOUT = {}
-bOUT = b''
-sENC = 'utf-8'
-# grep '#''#' logging_tox_savefile.py|sed -e 's/.* //'
-sEDIT_HELP = """
-NAME,.,Nick_name,str
-STATUSMESSAGE,.,Status_message,str
-STATUS,.,Online_status,int
-"""

 #messenger.c
 MESSENGER_STATE_TYPE_NOSPAMKEYS = 1
@@ -207,35 +199,30 @@ Length Contents
 o = delta+1+32+1024+1+2+128; l = 2
 nsize = struct.unpack_from(">H", result, o)[0]
 o = delta+1+32+1024+1+2; l = 128
-name = str(result[o:o+nsize], sENC)
+name = str(result[o:o+nsize], 'utf-8')

 o = delta+1+32+1024+1+2+128+2+1007; l = 2
 msize = struct.unpack_from(">H", result, o)[0]
 o = delta+1+32+1024+1+2+128+2; l = 1007
-mame = str(result[o:o+msize], sENC)
+mame = str(result[o:o+msize], 'utf-8')
 LOG.info(f"Friend #{i} {dStatus[status]} {name} {pk}")
 lIN += [{"Status": dStatus[status],
 "Name": name,
 "Pk": pk}]
 return lIN

-def lProcessGroups(state, index, length, result, label="GROUPS"):
-"""
-No GROUPS description in spec.html
-"""
-global sENC
+def lProcessGroups(state, index, length, result):
 lIN = []
 i = 0
 if not msgpack:
-LOG.warn(f"process_chunk Groups = NO msgpack bytes={length}")
+LOG.debug(f"TODO process_chunk Groups = no msgpack bytes={length}")
 return []
 try:
 groups = msgpack.loads(result, raw=True)
-LOG.info(f"{label} {len(groups)} groups")
+LOG.debug(f"TODO process_chunk Groups len={len(groups)}")
 for group in groups:
 assert len(group) == 7, group
 i += 1

 state_values, \
 state_bin, \
 topic_info, \
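The friend-record parsing above reads a fixed-width field plus a separate 2-byte big-endian byte count with ```struct.unpack_from```, then slices only the bytes actually used. A minimal sketch of that pattern on a made-up buffer (the real offsets are replaced by 0 for clarity):

```python
import struct

# Fabricated record: a 128-byte, null-padded name field followed by a 2-byte
# big-endian count of the bytes actually used, the shape lProcessFriends walks above.
record = b"Alice".ljust(128, b"\x00") + struct.pack(">H", 5)
o = 0
nsize = struct.unpack_from(">H", record, o + 128)[0]   # length stored after the field
name = str(record[o:o + nsize], 'utf-8')
print(nsize, name)   # 5 Alice
```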
@@ -254,8 +241,7 @@ def lProcessGroups(state, index, length, result, label="GROUPS"):
 topic_lock, \
 voice_state = state_values
 LOG.info(f"lProcessGroups #{i} version={version}")
-dBINS = {"Version": version,
-"Privacy_state": privacy_state}
+dBINS = {"Version": version}
 lIN += [{"State_values": dBINS}]

 assert len(state_bin) == 5, state_bin
@@ -269,14 +255,14 @@ def lProcessGroups(state, index, length, result, label="GROUPS"):
 lIN += [{"State_bin": dBINS}]

 assert len(topic_info) == 6, topic_info
-topic_info_topic = str(topic_info[3], sENC)
+topic_info_topic = str(topic_info[3], 'utf-8')
 LOG.info(f"lProcessGroups #{i} topic_info_topic={topic_info_topic}")
 dBINS = {"topic_info_topic": topic_info_topic}
 lIN += [{"Topic_info": dBINS}]

 assert len(mod_list) == 2, mod_list
 num_moderators = mod_list[0]
-LOG.info(f"lProcessGroups #{i} num moderators={mod_list[0]}")
+LOG.debug(f"lProcessGroups #{i} num moderators={mod_list[0]}")
 #define CRYPTO_SIGN_PUBLIC_KEY_SIZE 32
 mods = mod_list[1]
 assert len(mods) % 32 == 0, len(mods)
@@ -286,7 +272,7 @@ def lProcessGroups(state, index, length, result, label="GROUPS"):
 mod = mods[j*32:j*32 + 32]
 LOG.info(f"lProcessGroups group#{i} mod#{j} sig_pk={bin_to_hex(mod)}")
 lMODS += [{"Sig_pk": bin_to_hex(mod)}]
-lIN += [{"Moderators": lMODS}]
+if lMODS: lIN += [{"Moderators": lMODS}]

 assert len(keys) == 4, keys
 LOG.debug(f"lProcessGroups #{i} {repr(list(map(len, keys)))}")
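The moderator loop above walks a flat byte string in 32-byte signature-key slices. A self-contained sketch of the same slicing, with ```bin_to_hex``` replaced by ```bytes.hex()``` as a stand-in and a fabricated blob:

```python
# Two fake 32-byte signature keys packed back to back.
mods = bytes(range(32)) + bytes(range(32, 64))
assert len(mods) % 32 == 0, len(mods)
lMODS = []
for j in range(len(mods) // 32):
    mod = mods[j * 32:j * 32 + 32]
    lMODS.append({"Sig_pk": mod.hex()})   # stand-in for bin_to_hex(mod)
print(len(lMODS), lMODS[0]["Sig_pk"][:8])
```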
@@ -308,7 +294,7 @@ def lProcessGroups(state, index, length, result, label="GROUPS"):

 assert len(self_info) == 4, self_info
 self_nick_len, self_role, self_status, self_nick = self_info
-self_nick = str(self_nick, sENC)
+self_nick = str(self_nick, 'utf-8')
 LOG.info(f"lProcessGroups #{i} self_nick={self_nick}")
 dBINS = {"Self_nick": self_nick}
 lIN += [{"Self_info": dBINS}]
@@ -341,10 +327,10 @@ The Node Info data structure contains a Transport Protocol, a Socket
 while length > 0:
 status = struct.unpack_from(">B", result, delta)[0]
 if status >= 128:
-prot = 'TCP'
+ipv = 'TCP'
 af = status - 128
 else:
-prot = 'UDP'
+ipv = 'UDP'
 af = status
 if af == 2:
 af = 'IPv4'
@@ -357,10 +343,10 @@ The Node Info data structure contains a Transport Protocol, a Socket
 total = 1 + alen + 2 + 32
 port = int(struct.unpack_from(">H", result, delta+1+alen)[0])
 pk = bin_to_hex(result[delta+1+alen+2:delta+1+alen+2+32], 32)
-LOG.info(f"{label} #{relay} bytes={length} status={status} prot={prot} af={af} ip={ipaddr} port={port} pk={pk}")
+LOG.info(f"{label} #{relay} bytes={length} status={status} ip={ipv} af={af} ip={ipaddr} port={port} pk={pk}")
 lIN += [{"Bytes": length,
 "Status": status,
-"Prot": prot,
+"Ip": ipv,
 "Af": af,
 "Ip": ipaddr,
 "Port": port,
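Both branches decode the node ```status``` byte the same way: values of 128 and above mean TCP (subtract 128 to get the address family), lower values mean UDP, and family 2 is IPv4. A small sketch; mapping 10 to IPv6 is an assumption taken from the Tox spec, not something shown in this diff:

```python
def decode_status(status):
    # High bit selects the transport, the remainder is the address family.
    if status >= 128:
        prot, af = 'TCP', status - 128
    else:
        prot, af = 'UDP', status
    return prot, {2: 'IPv4', 10: 'IPv6'}.get(af, af)

print(decode_status(2))     # ('UDP', 'IPv4')
print(decode_status(130))   # ('TCP', 'IPv4')
```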
@@ -389,7 +375,7 @@ def lProcessDHTnodes(state, index, length, result, label="DHTnode"):
 while offset < slen: #loop over nodes
 status = struct.unpack_from(">B", result, offset+8)[0]
 assert status < 12
-prot = 'UDP'
+ipv = 'UDP'
 if status == 2:
 af = 'IPv4'
 alen = 4
@@ -403,40 +389,32 @@ def lProcessDHTnodes(state, index, length, result, label="DHTnode"):
 pk = bin_to_hex(result[offset+8+1+alen+2:offset+8+1+alen+2+32], 32)

 LOG.info(f"{label} #{relay} status={status} ipaddr={ipaddr} port={port} {pk}")
-lIN += [{
-"Status": status,
-"Prot": prot,
-"Af": af,
-"Ip": ipaddr,
-"Port": port,
-"Pk": pk}]
+lIN += [{"status": status,
+"af": af,
+"ipaddr": ipaddr,
+"port": port,
+"pk": pk}]
 offset += subtotal
 delta += total
 length -= total
 relay += 1
 return lIN

-def process_chunk(index, state, oArgs=None):
-global lOUT, bOUT, aOUT
-global sENC
+def process_chunk(index, state):
+global lOUT, bOUT, iTOTAL, aOUT

-length = struct.unpack_from("<I", state, index)[0]
+length = struct.unpack_from("<H", state, index)[0]
 data_type = struct.unpack_from("<H", state, index + 4)[0]
-check = struct.unpack_from("<H", state, index + 6)[0]
-assert check == 0x01CE, check
 new_index = index + length + 8
 result = state[index + 8:index + 8 + length]
+iTOTAL += length + 8

+# plan on repacking as we read - this is just a starting point
+# We'll add the results back to bOUT to see if we get what we started with.
+# Then will will be able to selectively null sections or selectively edit.
+bOUT += struct.pack("<H", length) + struct.pack("<H", data_type) + result

 label = dSTATE_TYPE[data_type]
-if oArgs.command == 'edit' and oArgs.edit:
-section,num,key,val = oArgs.edit.split(',',3)
-
-diff = index - len(bOUT)
-if bDEBUG and diff > 0:
-LOG.warn(f"PROCESS_CHUNK {label} index={index} bOUT={len(bOUT)} delta={diff} length={length}")
-elif bDEBUG:
-LOG.trace(f"PROCESS_CHUNK {label} index={index} bOUT={len(bOUT)} delta={diff} length={length}")

 if data_type == MESSENGER_STATE_TYPE_NOSPAMKEYS:
 nospam = bin_to_hex(result[0:4])
 public_key = bin_to_hex(result[4:36])
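The two branches disagree on how the section header is read above (```"<I"``` with a ```0x01CE``` check versus ```"<H"```). As a hedged illustration, here is one section header as the old branch reads it, on fabricated bytes:

```python
import struct

def read_chunk(state, index):
    # uint32 payload length, uint16 section type, uint16 magic 0x01CE (little-endian),
    # then the payload; the next chunk starts 8 + length bytes further on.
    length = struct.unpack_from("<I", state, index)[0]
    data_type = struct.unpack_from("<H", state, index + 4)[0]
    check = struct.unpack_from("<H", state, index + 6)[0]
    assert check == 0x01CE, hex(check)
    payload = state[index + 8:index + 8 + length]
    return data_type, payload, index + 8 + length

sample = struct.pack("<IHH", 4, 1, 0x01CE) + b"\xde\xad\xbe\xef"
print(read_chunk(sample, 0))   # (1, b'\xde\xad\xbe\xef', 12)
```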
@@ -447,114 +425,71 @@ def process_chunk(index, state, oArgs=None):
 aIN = {"Nospam": f"{nospam}",
 "Public_key": f"{public_key}",
 "Private_key": f"{private_key}"}
-lOUT += [{label: aIN}]; aOUT.update({label: aIN})
+lOUT += [{"NOSPAMKEYS": aIN}]; aOUT.update({"NOSPAMKEYS": aIN})

 elif data_type == MESSENGER_STATE_TYPE_DHT:
-LOG.debug(f"process_chunk {label} length={length}")
+LOG.debug(f"process_chunk {dSTATE_TYPE[data_type]} length={length}")
 lIN = lProcessDHTnodes(state, index, length, result)
-lOUT += [{label: lIN}]; aOUT.update({label: lIN})
+if lIN: lOUT += [{"DHT": lIN}]; aOUT.update({"DHT": lIN})

 elif data_type == MESSENGER_STATE_TYPE_FRIENDS:
-LOG.info(f"{label} {length // 2216} FRIENDS {length % 2216}")
+LOG.debug(f"TODO process_chunk {length // 2216} FRIENDS {length} {length % 2216}")
 lIN = lProcessFriends(state, index, length, result)
-lOUT += [{label: lIN}]; aOUT.update({label: lIN})
+if lIN: lOUT += [{"FRIENDS": lIN}]; aOUT.update({"FRIENDS": lIN})

 elif data_type == MESSENGER_STATE_TYPE_NAME:
-name = str(result, sENC)
-LOG.info(f"{label} Nick_name = " +name)
-aIN = {"Nick_name": name}
-lOUT += [{label: aIN}]; aOUT.update({label: aIN})
-if oArgs.command == 'edit' and section == label:
-## NAME,.,Nick_name,str
-if key == "Nick_name":
-result = bytes(val, sENC)
-length = len(result)
-LOG.info(f"{label} {key} EDITED to {val}")
+name = str(state[index + 8:index + 8 + length], 'utf-8')
+LOG.info("Nick_name = " +name)
+aIN = {"NAME": name}
+lOUT += [{"Nick_name": aIN}]; aOUT.update({"Nick_name": aIN})

 elif data_type == MESSENGER_STATE_TYPE_STATUSMESSAGE:
-mess = str(result, sENC)
-LOG.info(f"{label} StatusMessage = " +mess)
+mess = str(state[index + 8:index + 8 + length], 'utf-8')
+LOG.info(f"StatusMessage = " +mess)
 aIN = {"Status_message": mess}
-lOUT += [{label: aIN}]; aOUT.update({label: aIN})
-if oArgs.command == 'edit' and section == label:
-## STATUSMESSAGE,.,Status_message,str
-if key == "Status_message":
-result = bytes(val, sENC)
-length = len(result)
-LOG.info(f"{label} {key} EDITED to {val}")
+lOUT += [{"STATUSMESSAGE": aIN}]; aOUT.update({"STATUSMESSAGE": aIN})

 elif data_type == MESSENGER_STATE_TYPE_STATUS:
 # 1 uint8_t status (0 = online, 1 = away, 2 = busy)
 dStatus = {0: 'online', 1: 'away', 2: 'busy'}
 status = struct.unpack_from(">b", state, index)[0]
 status = dStatus[status]
-LOG.info(f"{label} = " +status)
+LOG.info(f"{dSTATE_TYPE[data_type]} = " +status)
 aIN = {f"Online_status": status}
 lOUT += [{"STATUS": aIN}]; aOUT.update({"STATUS": aIN})
-if oArgs.command == 'edit' and section == label:
-## STATUS,.,Online_status,int
-if key == "Online_status":
-result = struct.pack(">b", int(val))
-length = len(result)
-LOG.info(f"{label} {key} EDITED to {val}")

 elif data_type == MESSENGER_STATE_TYPE_GROUPS:
-if length > 0:
-lIN = lProcessGroups(state, index, length, result, label)
-else:
-lIN = []
-LOG.info(f"NO {label}")
-lOUT += [{label: lIN}]; aOUT.update({label: lIN})
+lIN = lProcessGroups(state, index, length, result)
+if lIN: lOUT += [{"GROUPS": lIN}]; aOUT.update({"GROUPS": lIN})

 elif data_type == MESSENGER_STATE_TYPE_TCP_RELAY:
-if length > 0:
-lIN = lProcessNodeInfo(state, index, length, result, "TCPnode")
-else:
-lIN = []
-LOG.info(f"NO {label}")
-lOUT += [{label: lIN}]; aOUT.update({label: lIN})
+lIN = lProcessNodeInfo(state, index, length, result, "TCPnode")
+if lIN: lOUT += [{"TCP_RELAY": lIN}]; aOUT.update({"TCP_RELAY": lIN})

 elif data_type == MESSENGER_STATE_TYPE_PATH_NODE:
 #define NUM_SAVED_PATH_NODES 8
 assert length % 8 == 0, length
-LOG.debug(f"process_chunk {label} bytes={length}")
+LOG.debug(f"TODO process_chunk {dSTATE_TYPE[data_type]} bytes={length}")
 lIN = lProcessNodeInfo(state, index, length, result, "PATHnode")
-lOUT += [{label: lIN}]; aOUT.update({label: lIN})
+if lIN: lOUT += [{label: lIN}]; aOUT.update({label: lIN})

 elif data_type == MESSENGER_STATE_TYPE_CONFERENCES:
-lIN = []
 if length > 0:
-LOG.debug(f"TODO process_chunk {label} bytes={length}")
+LOG.debug(f"TODO process_chunk {dSTATE_TYPE[data_type]} bytes={length}")
 else:
-LOG.info(f"NO {label}")
-lOUT += [{label: []}]; aOUT.update({label: []})
+LOG.info(f"NO {dSTATE_TYPE[data_type]}")
+lOUT += [{"CONFERENCES": []}]; aOUT.update({"CONFERENCES": []})

 elif data_type != MESSENGER_STATE_TYPE_END:
-LOG.error("UNRECOGNIZED datatype={datatype}")
-sys.exit(1)
+LOG.warn("UNRECOGNIZED datatype={datatype}")

 else:
 LOG.info("END") # That's all folks...
-# drop through

-# We repack as we read: or edit as we parse; simply edit result and length.
-# We'll add the results back to bOUT to see if we get what we started with.
-# Then will will be able to selectively null sections or selectively edit.
-assert length == len(result), length
-bOUT += struct.pack("<I", length) + \
-struct.pack("<H", data_type) + \
-struct.pack("<H", check) + \
-result

-if data_type == MESSENGER_STATE_TYPE_END or \
-index + 8 >= len(state):
-diff = len(bSAVE) - len(bOUT)
-if oArgs.command != 'edit' and diff > 0:
-# if short repacking as we read - tox_profile is padded with nulls
-LOG.warn(f"PROCESS_CHUNK bSAVE={len(bSAVE)} bOUT={len(bOUT)} delta={diff}")
 return

-process_chunk(new_index, state, oArgs)
+# failsafe
+if index + 8 >= len(state): return
+process_chunk(new_index, state)

 def bAreWeConnected():
 # FixMe: Linux
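The old branch repacks every chunk into ```bOUT``` and edits a section by swapping in a new ```result``` and recomputing ```length```. A hedged sketch of that round-trip step, with the header layout taken from the old branch, made-up values, and an invented helper name:

```python
import struct

def repack_chunk(data_type, payload, new_payload=None):
    # Re-serialize one section; pass new_payload to "edit" it, as the removed
    # NAME/STATUSMESSAGE/STATUS branches above do with bytes(val, sENC).
    if new_payload is not None:
        payload = new_payload
    return (struct.pack("<I", len(payload)) +
            struct.pack("<H", data_type) +
            struct.pack("<H", 0x01CE) +
            payload)

# The section type value here is arbitrary, chosen only for the demo.
print(repack_chunk(1, b"old nick", b"FooBar").hex())
```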
@@ -625,7 +560,7 @@ def vBashFileNmapUdp():
 def vOsSystemNmapUdp(l, oArgs):
 iErrs = 0
 for elt in aOUT["DHT"]:
-cmd = f"sudo nmap -Pn -n -sU -p U:{elt['Port']} {elt['Ip']}"
+cmd = f"sudo nmap -Pn -n -sU -p U:{elt['port']} {elt['ipaddr']}"
 iErrs += os.system(cmd +f" >> {oArgs.output} 2>&1")
 if iErrs:
 LOG.warn(f"{oArgs.info} {iErrs} ERRORs to {oArgs.output}")
@@ -637,7 +572,7 @@ def vOsSystemNmapUdp(l, oArgs):
 def vOsSystemNmapTcp(l, oArgs):
 iErrs = 0
 for elt in l:
-cmd = f"sudo nmap -Pn -n -sT -p T:{elt['Port']} {elt['Ip']}"
+cmd = f"sudo nmap -Pn -n -sT -p T:{elt['port']} {elt['ipaddr']}"
 print(f"{oArgs.info} NO errors to {oArgs.output}")
 iErrs += os.system(cmd +f" >> {oArgs.output} 2>&1")
 if iErrs:
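Both nmap helpers above build a shell string for ```os.system```. A hedged alternative sketch using ```subprocess``` instead (not code from either commit); the node keys ```port```/```ipaddr``` follow the new branch's dictionaries, the example values are invented:

```python
import subprocess

def nmap_udp_probe(node, output_path):
    # One UDP probe per node, output appended to the --output file.
    cmd = ["sudo", "nmap", "-Pn", "-n", "-sU", "-p", f"U:{node['port']}", node['ipaddr']]
    with open(output_path, "a") as out:
        return subprocess.run(cmd, stdout=out, stderr=subprocess.STDOUT).returncode

# Example: nmap_udp_probe({"port": 33445, "ipaddr": "203.0.113.5"}, "/tmp/nmap.out")
```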
@@ -677,14 +612,12 @@ def oMainArgparser(_=None):
 parser.add_argument('--output', type=str, default='',
 help='Destination for info/decrypt - defaults to stderr')
 parser.add_argument('--command', type=str, default='info',
-choices=['info', 'decrypt', 'nodes', 'edit'],
+choices=['info', 'decrypt', 'nodes'],
-required=True,
+# required=True,
 help='Action command - default: info')
-parser.add_argument('--edit', type=str, default='',
-help='comma seperated SECTION,key,value - unfinished')
 parser.add_argument('--indent', type=int, default=2,
 help='Indent for yaml/json/pprint')
-choices=['info', 'save', 'repr', 'yaml','json', 'pprint']
+choices=['info', 'repr', 'yaml','json', 'pprint']
 if bHAVE_NMAP: choices += ['nmap_tcp', 'nmap_udp', 'nmap_onion']
 parser.add_argument('--info', type=str, default='info',
 choices=choices,
@@ -700,26 +633,19 @@ def oMainArgparser(_=None):
 help='Action for nodes command (requires jq)')
 parser.add_argument('--download_nodes_url', type=str,
 default='https://nodes.tox.chat/json')
-parser.add_argument('--encoding', type=str, default=sENC)
 parser.add_argument('profile', type=str, nargs='?', default=None,
 help='tox profile file - may be encrypted')
 return parser

 if __name__ == '__main__':
+iTOTAL = 0
 lArgv = sys.argv[1:]
 parser = oMainArgparser()
 oArgs = parser.parse_args(lArgv)
-if oArgs.command in ['edit'] and oArgs.edit == 'help':
-l = list(dSTATE_TYPE.values())
-l.remove('END')
-print('Available Sections: ' +repr(l))
-print('Supported Quads: section,num,key,type ' +sEDIT_HELP)
-sys.exit(0)

 sFile = oArgs.profile
 assert os.path.isfile(sFile), sFile

-sENC = oArgs.encoding
 vSetupLogging()

 bSAVE = open(sFile, 'rb').read()
@@ -733,11 +659,11 @@ if __name__ == '__main__':

 oStream = None
 if oArgs.command == 'decrypt':
-assert oArgs.output, "--output required for this command"
-oStream = open(oArgs.output, 'wb')
-iRet = oStream.write(bSAVE)
-LOG.info(f"Wrote {iRet} to {oArgs.output}")
-iRet = 0
+if oArgs.output:
+oStream = open(oArgs.output, 'rb')
+else:
+oStream = sys.stdout
+oStream.write(bSAVE)

 elif oArgs.command == 'nodes':
 iRet = -1
@@ -795,7 +721,7 @@ if __name__ == '__main__':
 oStream.write(bSAVE)
 else:
 oStream = sys.stdout
-oStream.write(str(bSAVE, sENC))
+oStream.write(str(bSAVE, 'utf-8'))
 iRet = -1
 LOG.info(f"downloaded list of nodes saved to {oStream}")

|
|||||||
elif iRet == 0:
|
elif iRet == 0:
|
||||||
LOG.info(f"{oArgs.nodes} iRet={iRet} to {oArgs.output}")
|
LOG.info(f"{oArgs.nodes} iRet={iRet} to {oArgs.output}")
|
||||||
|
|
||||||
elif oArgs.command in ['info', 'edit']:
|
elif oArgs.command == 'info':
|
||||||
if oArgs.command in ['edit']:
|
bOUT = b'\x00\x00\x00\x00\x1f\x1b\xed\x15'
|
||||||
assert oArgs.output, "--output required for this command"
|
|
||||||
assert oArgs.edit != '', "--edit required for this command"
|
|
||||||
elif oArgs.command == 'info':
|
|
||||||
# assert oArgs.info != '', "--info required for this command"
|
|
||||||
if oArgs.info in ['save', 'yaml', 'json', 'repr', 'pprint']:
|
|
||||||
assert oArgs.output, "--output required for this command"
|
|
||||||
|
|
||||||
# toxEsave
|
# toxEsave
|
||||||
assert bSAVE[:8] == bMARK, "Not a Tox profile"
|
assert bSAVE[:8] == bOUT, "Not a Tox profile"
|
||||||
bOUT = bMARK
|
|
||||||
|
|
||||||
iErrs = 0
|
iErrs = 0
|
||||||
process_chunk(len(bOUT), bSAVE, oArgs)
|
lOUT = []; aOUT = {}
|
||||||
if not bOUT:
|
process_chunk(len(bOUT), bSAVE)
|
||||||
LOG.error(f"{oArgs.command} NO bOUT results")
|
if lOUT:
|
||||||
else:
|
if oArgs.output:
|
||||||
oStream = None
|
oStream = open(oArgs.output, 'wb')
|
||||||
LOG.debug(f"command={oArgs.command} len bOUT={len(bOUT)} results")
|
else:
|
||||||
|
oStream = sys.stdout
|
||||||
if oArgs.command in ['edit'] or oArgs.info in ['save']:
|
if oArgs.info == 'yaml' and yaml:
|
||||||
LOG.debug(f"{oArgs.command} saving to {oArgs.output}")
|
yaml.dump(aOUT, stream=oStream, indent=oArgs.indent)
|
||||||
oStream = open(oArgs.output, 'wb', encoding=None)
|
oStream.write('\n')
|
||||||
if oStream.write(bOUT) > 0: iRet = 0
|
elif oArgs.info == 'json' and json:
|
||||||
LOG.info(f"{oArgs.info}ed iRet={iRet} to {oArgs.output}")
|
json.dump(aOUT, oStream, indent=oArgs.indent)
|
||||||
|
oStream.write('\n')
|
||||||
|
elif oArgs.info == 'repr':
|
||||||
|
oStream.write(repr(aOUT))
|
||||||
|
oStream.write('\n')
|
||||||
|
elif oArgs.info == 'pprint':
|
||||||
|
pprint(aOUT, stream=oStream, indent=oArgs.indent, width=80)
|
||||||
elif oArgs.info == 'info':
|
elif oArgs.info == 'info':
|
||||||
pass
|
pass
|
||||||
elif oArgs.info == 'yaml' and yaml:
|
|
||||||
LOG.debug(f"{oArgs.command} saving to {oArgs.output}")
|
|
||||||
oStream = open(oArgs.output, 'wt', encoding=sENC)
|
|
||||||
yaml.dump(aOUT, stream=oStream, indent=oArgs.indent)
|
|
||||||
if oStream.write('\n') > 0: iRet = 0
|
|
||||||
LOG.info(f"{oArgs.info}ing iRet={iRet} to {oArgs.output}")
|
|
||||||
elif oArgs.info == 'json' and json:
|
|
||||||
LOG.debug(f"{oArgs.command} saving to {oArgs.output}")
|
|
||||||
oStream = open(oArgs.output, 'wt', encoding=sENC)
|
|
||||||
json.dump(aOUT, oStream, indent=oArgs.indent)
|
|
||||||
if oStream.write('\n') > 0: iRet = 0
|
|
||||||
LOG.info(f"{oArgs.info}ing iRet={iRet} to {oArgs.output}")
|
|
||||||
elif oArgs.info == 'repr':
|
|
||||||
LOG.debug(f"{oArgs.command} saving to {oArgs.output}")
|
|
||||||
oStream = open(oArgs.output, 'wt', encoding=sENC)
|
|
||||||
if oStream.write(repr(bOUT)) > 0: iRet = 0
|
|
||||||
if oStream.write('\n') > 0: iRet = 0
|
|
||||||
LOG.info(f"{oArgs.info}ing iRet={iRet} to {oArgs.output}")
|
|
||||||
elif oArgs.info == 'pprint':
|
|
||||||
LOG.debug(f"{oArgs.command} saving to {oArgs.output}")
|
|
||||||
oStream = open(oArgs.output, 'wt', encoding=sENC)
|
|
||||||
pprint(aOUT, stream=oStream, indent=oArgs.indent, width=80)
|
|
||||||
iRet = 0
|
|
||||||
LOG.info(f"{oArgs.info}ing iRet={iRet} to {oArgs.output}")
|
|
||||||
elif oArgs.info == 'nmap_tcp' and bHAVE_NMAP:
|
elif oArgs.info == 'nmap_tcp' and bHAVE_NMAP:
|
||||||
assert oArgs.output, "--output required for this command"
|
assert oArgs.output, "--output required for this command"
|
||||||
|
oStream.close()
|
||||||
vOsSystemNmapTcp(aOUT["TCP_RELAY"], oArgs)
|
vOsSystemNmapTcp(aOUT["TCP_RELAY"], oArgs)
|
||||||
elif oArgs.info == 'nmap_udp' and bHAVE_NMAP:
|
elif oArgs.info == 'nmap_udp' and bHAVE_NMAP:
|
||||||
assert oArgs.output, "--output required for this command"
|
assert oArgs.output, "--output required for this command"
|
||||||
|
oStream.close()
|
||||||
vOsSystemNmapUdp(aOUT["DHT"], oArgs)
|
vOsSystemNmapUdp(aOUT["DHT"], oArgs)
|
||||||
elif oArgs.info == 'nmap_onion' and bHAVE_NMAP:
|
elif oArgs.info == 'nmap_onion' and bHAVE_NMAP:
|
||||||
assert oArgs.output, "--output required for this command"
|
assert oArgs.output, "--output required for this command"
|
||||||
|
oStream.close()
|
||||||
vOsSystemNmapUdp(aOUT["PATH_NODE"], oArgs)
|
vOsSystemNmapUdp(aOUT["PATH_NODE"], oArgs)
|
||||||
|
|
||||||
|
# were short repacking as we read - 446 bytes missing
|
||||||
|
LOG.debug(f"len bSAVE={len(bSAVE)} bOUT={len(bOUT)} delta={len(bSAVE) - len(bOUT)} iTOTAL={iTOTAL}")
|
||||||
|
|
||||||
|
|
||||||
if oStream and oStream != sys.stdout and oStream != sys.stderr:
|
if oStream and oStream != sys.stdout and oStream != sys.stderr:
|
||||||
oStream.close()
|
oStream.close()
|
||||||
|
|
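The ```--info``` output step above dumps the collected profile dictionary with ```yaml.dump```/```json.dump``` and the ```--indent``` value. A minimal, self-contained sketch of the JSON path (the YAML path is analogous); the sample dictionary here is invented, the real one is built by ```process_chunk```:

```python
import json, sys

aOUT = {"NOSPAMKEYS": {"Nospam": "DEADBEEF"}, "STATUS": {"Online_status": "online"}}

def dump_info(aOUT, stream=sys.stdout, indent=2):
    # Mirrors the json branch above: dump to stdout or to the --output stream.
    json.dump(aOUT, stream, indent=indent)
    stream.write('\n')

dump_info(aOUT)
```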

@@ -1,120 +0,0 @@
-#!/bin/sh -e
-# -*- mode: sh; fill-column: 75; tab-width: 8; coding: utf-8-unix -*-
-
-PREFIX=/o/var/local/src
-EXE=python3.sh
-WRAPPER=$PREFIX/toxygen_wrapper
-
-[ -f /usr/local/bin/usr_local_tput.bash ] && \
-. /usr/local/bin/usr_local_tput.bash || {
-DEBUG() { echo DEBUG $* ; }
-INFO() { echo INFO $* ; }
-WARN() { echo WARN $* ; }
-ERROR() { echo ERROR $* ; }
-}
-
-set -- -e
-target=$PREFIX/tox_profile/logging_tox_savefile.py
-[ -s $target ] || exit 1
-
-tox=$HOME/.config/tox/toxic_profile.tox
-[ -s $tox ] || exit 2
-
-json=$HOME/.config/tox/DHTnodes.json
-[ -s $json ] || exit 3
-
-[ -d $WRAPPER ] || { ERROR wrapper is required https://git.plastiras.org/emdee/toxygen_wrapper ; exit 5 ; }
-export PYTHONPATH=$WRAPPER
-
-which jq > /dev/null && HAVE_JQ=1 || HAVE_JQ=0
-which nmap > /dev/null && HAVE_NMAP=1 || HAVE_NMAP=0
-
-sudo rm -f /tmp/toxic_profile.* /tmp/toxic_nodes.*
-
-[ "$HAVE_JQ" = 0 ] || \
-jq . < $json >/tmp/toxic_nodes.json || { ERROR jq $json ; exit 4 ; }
-[ -f /tmp/toxic_nodes.json ] || cp -p $json /tmp/toxic_nodes.json
-json=/tmp/toxic_nodes.json
-
-# required password
-INFO decrypt /tmp/toxic_profile.bin
-$EXE $target --command decrypt --output /tmp/toxic_profile.bin $tox || exit 11
-[ -s /tmp/toxic_profile.bin ] || exit 12
-
-tox=/tmp/toxic_profile.bin
-INFO info $tox
-$EXE $target --command info --info info $tox 2>/tmp/toxic_profile.info || exit 13
-[ -s /tmp/toxic_profile.info ] || exit 14
-
-INFO /tmp/toxic_profile.save
-$EXE $target --command info --info save --output /tmp/toxic_profile.save $tox 2>/dev/null || exit 15
-[ -s /tmp/toxic_profile.save ] || exit 16
-
-for the_tox in /tmp/toxic_profile.save ; do
-the_base=`echo $the_tox | sed -e 's/.save$//' -e 's/.tox$//'`
-for elt in json yaml pprint repr ; do
-INFO $the_base.$elt
-[ "$DEBUG" != 1 ] || echo DEBUG $EXE $target \
---command info --info $elt \
---output $the_base.$elt $the_tox
-$EXE $target --command info --info $elt \
---output $the_base.$elt $the_tox 2>/dev/null || exit 20
-[ -s $the_base.$elt ] || exit 21
-done
-
-$EXE $target --command edit --edit help $the_tox 2>/dev/null || exit 22
-
-INFO $the_base.edit1 'STATUSMESSAGE,.,Status_message,Toxxed on Toxic'
-$EXE $target --command edit --edit 'STATUSMESSAGE,.,Status_message,Toxxed on Toxic' \
---output $the_base.edit1.tox $the_tox 2>&1|grep EDIT
-[ -s $the_base.edit1.tox ] || exit 23
-$EXE $target --command info $the_base.edit1.tox 2>&1|grep Toxxed || exit 24
-
-INFO $the_base.edit2 'NAME,.,Nick_name,FooBar'
-$EXE $target --command edit --edit 'NAME,.,Nick_name,FooBar' \
---output $the_base.edit2.tox $the_tox 2>&1|grep EDIT
-[ -s $the_base.edit2.tox ] || exit 25
-$EXE $target --command info $the_base.edit2.tox 2>&1|grep FooBar || exit 26
-
-done
-
-the_tox=$json
-the_base=`echo $the_tox | sed -e 's/.save$//' -e 's/.json$//'`
-[ "$HAVE_JQ" = 0 ] || \
-for nmap in select_tcp select_udp select_version ; do
-INFO $the_base.$nmap
-$EXE $target --command nodes --nodes $nmap \
---output $the_base.$nmap.json $the_tox || exit 31
-[ -s $the_base.$nmap.json ] || exit 32
-done
-
-grep '"status_tcp": false' $the_base.select_tcp.json && exit 33
-grep '"status_udp": false' $the_base.select_udp.json && exit 34
-
-ls -l /tmp/toxic_profile.* /tmp/toxic_nodes.*
-
-/usr/local/bin/proxy_ping_test.bash tor || exit 0
-
-the_tox=$tox
-the_base=`echo $the_tox | sed -e 's/.save$//' -e 's/.tox$//'`
-[ "$HAVE_JQ" = 0 ] || \
-[ "$HAVE_NMAP" = 0 ] || \
-for nmap in nmap_tcp nmap_udp nmap_onion ; do
-INFO $the_base.$nmap
-$EXE $target --command info --info $nmap \
---output $the_base.$nmap $the_tox.json || exit 40
-[ -s $the_base.$nmap.json ] || exit 41
-done
-
-the_json=$json
-the_base=`echo $json | sed -e 's/.save$//' -e 's/.json$//'`
-[ "$HAVE_JQ" = 0 ] || \
-for nmap in nmap_tcp nmap_udp ; do
-INFO $the_base.$nmap
-$EXE $target --command nodes --nodes $nmap \
---output $the_base.$nmap.json $the_json || exit 51
-[ -s $the_base.$nmap.json ] || exit 52
-done
-
-exit 0
-