Bitcoin / DefiChain RPC rawTransaction - cryptography

Hello, I am trying to figure out how to encode normal RPC calls into a raw transaction.
So far my problem is that I don't know what the hex string must contain.
For example:
rpc command: "method: 'compositeswap' {'from':'MyAddress','tokenFrom':'MyToken1','amountFrom':'0.001','to':'Address','tokenTo':'Token2','maxPrice':'0.01'}"
There seem to be op codes for this, such as OP_DEFI_TX_COMPOSITE_SWAP.
How does the chain know to execute an operation with its parameters?
I tried to figure it out by decoding an actual transaction from the test wallet:
from the part
"scriptPubKey": {
"asm": "OP_RETURN 446654786917a914721d5b1c58d38af7b6797b385b6ac291b002f88c870000e1f5050000000017a914721d5b1c58d38af7b6797b385b6ac291b002f88c870b0000000000000000c74e71050000000000",
"hex": "6a4c50446654786917a914721d5b1c58d38af7b6797b385b6ac291b002f88c870000e1f5050000000017a914721d5b1c58d38af7b6797b385b6ac291b002f88c870b0000000000000000c74e71050000000000",
"type": "nulldata"
},
of
{
"txid": "9a98d693d4c5107647131ee1bb7a5b0cce0fcdbe390c9609a71f4b71157e39dc",
"hash": "a18a6fd4abf0ba1885febcf37a333b1d1f34b4de954f13538a6619b1d7b20042",
"version": 4,
"size": 309,
"vsize": 228,
"weight": 909,
"locktime": 0,
"vin": [
{
"txid": "9e1140197138ba5e247ab3b3f1f4881bf7be624a939073d9795242caf3634409",
"vout": 1,
"scriptSig": {
"asm": "0014451be7ab94ccd7eff0a33ab8fe997a75c62eb7dd",
"hex": "160014451be7ab94ccd7eff0a33ab8fe997a75c62eb7dd"
},
"txinwitness": [
"30440220552d8aa4e129f566bfe083b780e1dcf67a3ca0176e07407912451371f597bc620220698c6ac483e021b78c7d7bf42e14f1c619618d1941cf12fec7cf8302ece6d3ae01",
"03c7d2dbe5ee429de5d88e8594cda6ceb84268ebbf9d0b16b33664e999307f33e8"
],
"sequence": 4294967295
}
],
"vout": [
{
"value": 0,
"n": 0,
"scriptPubKey": {
"asm": "OP_RETURN 446654786917a914721d5b1c58d38af7b6797b385b6ac291b002f88c870000e1f5050000000017a914721d5b1c58d38af7b6797b385b6ac291b002f88c870b0000000000000000c74e71050000000000",
"hex": "6a4c50446654786917a914721d5b1c58d38af7b6797b385b6ac291b002f88c870000e1f5050000000017a914721d5b1c58d38af7b6797b385b6ac291b002f88c870b0000000000000000c74e71050000000000",
"type": "nulldata"
},
"tokenId": 0
},
{
"value": 183.03901748,
"n": 1,
"scriptPubKey": {
"asm": "OP_HASH160 721d5b1c58d38af7b6797b385b6ac291b002f88c OP_EQUAL",
"hex": "a914721d5b1c58d38af7b6797b385b6ac291b002f88c87",
"reqSigs": 1,
"type": "scripthash",
"addresses": [
"tgfbETCK2kYyvsnHbS41v9aicQzAXLsz9B"
]
},
"tokenId": 0
}
]
}
the
OP_RETURN 446654786917a914721d5b1c58d38af7b6797b385b6ac291b002f88c870000e1f5050000000017a914721d5b1c58d38af7b6797b385b6ac291b002f88c870b0000000000000000c74e71050000000000
can't be decoded successfully back into a string.
Does someone know what kind of encoding it is?
I tried
bytess=bytes.fromhex("446654786917a914721d5b1c58d38af7b6797b385b6ac291b002f88c870000e1f5050000000017a914721d5b1c58d38af7b6797b385b6ac291b002f88c870b0000000000000000c74e71050000000000")
print(bytess.decode("latin-1"))
but only get
INFO (MainThread) 14.05.2022 22:02:39 DfTxi©r
INFO (MainThread) 14.05.2022 22:02:39 [
INFO (MainThread) 14.05.2022 22:02:39 XÓŠ÷¶y{8[j‘°øŒ‡ áõ ©r

You can't just decode the data after the OP code as text. It is a concatenation of several input parameters, depending on the function.
In your example
6a4c50446654786917a914721d5b1c58d38af7b6797b385b6ac291b002f88c870000e1f5050000000017a914721d5b1c58d38af7b6797b385b6ac291b002f88c870b0000000000000000c74e71050000000000
6a is OP_RETURN
4c50 is the push opcode plus the number of bytes following: 4c is OP_PUSHDATA1 and 50 (hex) means 80 bytes of data follow
44665478 is "DfTx" in hex (ASCII)
6917a914721d5b1c58d38af7b6797b385b6ac291b002f88c870000e1f5050000000017a914721d5b1c58d38af7b6797b385b6ac291b002f88c870b0000000000000000c74e71050000000000 = the parameters from your JSON. For your address you need Base58 (in this case) or Bech32 decoding to get the hex (some reverse engineering necessary).
The part with lots of zeros is the amount/maxPrice.
The tokens are referenced by their IDs, not their names.
I am thinking about writing a guide. So far, I am afraid you have to do some reverse engineering. Hope this helps a little bit.
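To make the byte layout above concrete, here is a minimal Python parsing sketch. It only splits off the OP_RETURN opcode, the push length, the "DfTx" marker, and what appears to be a one-byte transaction-type tag; decoding the remaining parameter bytes (scripts, token IDs, amounts) is specific to each DeFi transaction type and is left as raw hex here:

script_hex = (
    "6a4c50446654786917a914721d5b1c58d38af7b6797b385b6ac291b002f88c87"
    "0000e1f5050000000017a914721d5b1c58d38af7b6797b385b6ac291b002f88c87"
    "0b0000000000000000c74e71050000000000"
)
script = bytes.fromhex(script_hex)

assert script[0] == 0x6A, "not an OP_RETURN script"

# Push opcodes 0x01-0x4b push that many bytes directly;
# 0x4c (OP_PUSHDATA1) is followed by a one-byte length.
if script[1] <= 0x4B:
    length, data = script[1], script[2:]
elif script[1] == 0x4C:
    length, data = script[2], script[3:]
else:
    raise ValueError("unexpected push opcode")

payload = data[:length]
assert payload[:4] == b"DfTx", "not a DeFiChain custom transaction"

tx_type = chr(payload[4])   # 'i' (0x69) in this example
raw_params = payload[5:]    # serialized parameters: scripts, token IDs, amounts
print(tx_type, raw_params.hex())

Running it prints the type byte i and the parameter hex from the breakdown above; mapping the type byte to an operation such as compositeswap and decoding the parameters still requires the DeFiChain serialization code (or some reverse engineering), as mentioned.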

Related

Dataframe rows to json

I have a dataframe with many rows and columns of the form (this is an oversimplified dataframe):
id dur proto service state attack_cat label
0 1 0.121478 tcp dns FIN Normal 0
1 2 4.287901 udp ftp INT Exploits 1
I would like to write all the rows of this dataframe as JSON items, for example, for the first row:
{"type": "event",
"subtype": "",
"datatype": "Instance",
"domain": "Cyber",
"created": str(datetime.datetime.now()),
"details": {id: 1,
dur: 0.121478,
proto: tcp,
service: dns,
state: FIN,
attack_cat: Normal,
label:0}
}
I tried to do something like:
{"type": "event",
"subtype": "",
"datatype": "Instance",
"domain": "Cyber",
"created": str(datetime.datetime.now()),
"details": dataframe.loc[i].to_dict()
}
and do a for loop through all the rows, but it gives me the error
TypeError: unhashable type: 'dict'
There are actually two problems, as far as I can see.
Generally, a Python dictionary can be the value of another dictionary. For example,
person = {
    "first_name": "robert",
    "last_name": "ren",
    "hist": {"today": 1, "yesterday": 2}
}
The value of hist is a dict. This means that, from this perspective, your code seems to be OK. However, a dictionary cannot be used as a key (or as a set element), because dicts are not hashable. For example,
{{}}
tries to put an empty dict into a set and gives you the following error
TypeError: unhashable type: 'dict'
If you want to solve this problem, I recommend printing out each line as you create the dicts to see where it breaks. That said, I would guess that "details": dataframe.loc[i].to_dict() is what causes the trouble.
Second, your first dict is invalid because the keys (id, dur, proto, ...) are not strings. It should be:
{
    "type": "event",
    "subtype": "",
    "datatype": "Instance",
    "domain": "Cyber",
    "created": str(datetime.datetime.now()),
    "details": {
        "id": 1,
        "dur": 0.121478,
        "proto": tcp,
        "service": dns,
        "state": FIN,
        "attack_cat": Normal,
        "label": 0
    }
}
This assumes that tcp, dns, FIN, and Normal are defined variables.
EDIT
The following code works on my machine.
import pandas as pd

df = pd.DataFrame({
    "id": [1],
    "dur": [0.12],
    "proto": ["tcp"],
    "service": ["dns"],
    "state": ["FIN"],
    "attack_cat": ["Normal"],
    "label": [0]
})

row_list_as_dict = []
for idx, row in df.iterrows():
    row_list_as_dict.append({
        "type": "event",
        "subtype": "",
        "datatype": "Instance",
        "domain": "Cyber",
        "details": {
            "id": row["id"],
            "dur": row["dur"],
            "proto": row["proto"],
            "service": row["service"],
            "state": row["state"],
            "attack_cat": row["attack_cat"],
            "label": row["label"]
        }
    })

row_list_as_dict
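If the end goal is actual JSON (as in the question), one extra step serializes the list. This is a small follow-up sketch, continuing from row_list_as_dict above, that adds back the "created" field from the question and uses default=str so NumPy scalar types (e.g. numpy.int64 from iterrows) do not break json.dumps:

import datetime
import json

json_items = []
for item in row_list_as_dict:
    item = dict(item)                               # shallow copy before modifying
    item["created"] = str(datetime.datetime.now())  # timestamp field from the question
    json_items.append(item)

# default=str converts NumPy scalars (and anything else non-serializable) to strings.
print(json.dumps(json_items, indent=2, default=str))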

Apache Druid segment merge task submission failure

I am using Druid 0.9.1.1 and trying to merge all the segments of a datasource per day into a single segment. However, the merge task submission fails with the error:
{"error":"Instantiation of [simple type, class io.druid.timeline.DataSegment] value failed: null (through reference chain: java.util.ArrayList[0])"}
I got the segment details from a segment metadata query. The Druid documentation is not much help, as it only specifies the raw structure of the overall query, not the required structure of the segment details (below is what the Druid documentation suggests).
{
"type": "merge",
"id": <task_id>,
"dataSource": <task_datasource>,
"aggregations": <list of aggregators>,
"segments": <JSON list of DataSegment objects to merge>
}
Example query:
{
"type": "merge",
"id": "envoy_merge_task",
"dataSource": "dcap.envoy.diskmounts.kafka",
"segments": [{"id":"dcap.sermon.threshold.kafka_2017-05-22T00:00:00.000Z_2017-05-23T00:00:00.000Z_2017-05-22T07:00:02.951Z","intervals":["2017-05-22T00:00:00.000Z/2017-05-23T00:00:00.000Z"],"columns":{},"size":5460959,"numRows":41577,"aggregators":null,"queryGranularity":null},{"id":"dcap.sermon.threshold.kafka_2017-05-22T00:00:00.000Z_2017-05-23T00:00:00.000Z_2017-05-22T07:00:02.951Z_1","intervals":["2017-05-22T00:00:00.000Z/2017-05-23T00:00:00.000Z"],"columns":{},"size":5448881,"numRows":41577,"aggregators":null,"queryGranularity":null},{"id":"dcap.sermon.threshold.kafka_2017-05-22T00:00:00.000Z_2017-05-23T00:00:00.000Z_2017-05-22T07:00:02.951Z_2","intervals":["2017-05-22T00:00:00.000Z/2017-05-23T00:00:00.000Z"],"columns":{},"size":5454452,"numRows":41571,"aggregators":null,"queryGranularity":null},{"id":"dcap.sermon.threshold.kafka_2017-05-22T00:00:00.000Z_2017-05-23T00:00:00.000Z_2017-05-22T07:00:02.951Z_3","intervals":["2017-05-22T00:00:00.000Z/2017-05-23T00:00:00.000Z"],"columns":{},"size":5456267,"numRows":41569,"aggregators":null,"queryGranularity":null}] }
I have tried different structures for the "segments" key, which all result in the same error.
For example:
"segments": [{"id":"dcap.envoy.diskmounts.kafka_2017-05-21T06:00:00.000Z_2017-05-21T07:00:00.000Z_2017-05-21T06:02:43.482Z"},{"id":"dcap.envoy.diskmounts.kafka_2017-05-21T06:00:00.000Z_2017-05-21T07:00:00.000Z_2017-05-21T06:02:43.482Z_1"},{"id":"dcap.envoy.diskmounts.kafka_2017-05-21T06:00:00.000Z_2017-05-21T07:00:00.000Z_2017-05-21T06:02:43.482Z_2"},{"id":"dcap.envoy.diskmounts.kafka_2017-05-21T06:00:00.000Z_2017-05-21T07:00:00.000Z_2017-05-21T06:02:43.482Z_3"}]
What is the right structure for segment merge tasks?
The format I used for segments is:
"segments":[
{
"dataSource": "wikiticker88",
"interval": "2015-09-12T02:00:00.000Z/2015-09-12T03:00:00.000Z",
"version": "2018-01-16T07:23:16.425Z",
"loadSpec": {
"type": "local",
"path": "/home/linux/druid-0.11.0/var/druid/segments/wikiticker88/2015-09-12T02:00:00.000Z_2015-09-12T03:00:00.000Z/2018-01-16T07:23:16.425Z/0/index.zip"
},
"dimensions": "channel,cityName,comment,countryIsoCode,countryName,isAnonymous,isMinor,isNew,isRobot,isUnpatrolled,metroCode,namespace,page,regionIsoCode,regionName,user",
"metrics": "count,added,deleted,delta,user_unique",
"shardSpec": {
"type": "none"
},
"binaryVersion": 9,
"size": 198267,
"identifier": "wikiticker88_2015-09-12T02:00:00.000Z_2015-09-12T03:00:00.000Z_2018-01-16T07:23:16.425Z"
}
]
Use this endpoint to get the metadata of your segments:
/druid/coordinator/v1/metadata/datasources/{dataSourceName}/segments?full
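To tie these pieces together, here is a rough Python sketch of the workflow: fetch the full segment metadata from the coordinator and post it as the "segments" list of a merge task to the overlord. The hosts, ports, datasource name, and aggregator list are placeholders, and /druid/indexer/v1/task is the usual overlord task endpoint, but check it against your Druid version:

import json
import requests

# Placeholder hosts/ports and datasource name; adjust for your cluster.
COORDINATOR = "http://localhost:8081"
OVERLORD = "http://localhost:8090"
DATASOURCE = "dcap.envoy.diskmounts.kafka"

# Full DataSegment metadata as stored by the coordinator (note the ?full flag).
resp = requests.get(
    COORDINATOR
    + "/druid/coordinator/v1/metadata/datasources/"
    + DATASOURCE
    + "/segments?full"
)
resp.raise_for_status()
segments = resp.json()

merge_task = {
    "type": "merge",
    "id": "envoy_merge_task",
    "dataSource": DATASOURCE,
    # The aggregators must match the ones used at ingestion time.
    "aggregations": [{"type": "count", "name": "count"}],
    "segments": segments,
}

task_resp = requests.post(
    OVERLORD + "/druid/indexer/v1/task",
    data=json.dumps(merge_task),
    headers={"Content-Type": "application/json"},
)
print(task_resp.status_code, task_resp.text)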

Media Source Extensions appendBuffer of WebM stream in random order

I am trying to achieve parallel video downloading from multiple sources. However, the MSE appendBuffer method always fails when the parts are not appended in the sequential order of the video file.
I would like to append the parts in random order and play the video "as soon as possible".
I explored the SourceBuffer mode property as well as timestampOffset; neither was helpful.
I am wondering if the source WebM file I have could be in an unsupported format for such a task (the sequential approach works fine).
source video file
Thank you for any advice.
UPDATE:
I tried to analyse a well-known example video file and figured out that it is possible to append parts of it out of order. It seems it is necessary to follow the Cluster byte ranges:
<Cluster type="list" offset="4357">
<Timecode type="uint" value="0"/>
<SimpleBlock type="binary" size="7723" trackNum="1" timecode="0" presentationTimecode="0" flags="80"/>
<SimpleBlock type="binary" size="5" trackNum="2" timecode="0" presentationTimecode="0" flags="80"/>
...
</Cluster>
<Cluster type="list" offset="16187">
<Timecode type="uint" value="385"/>
<SimpleBlock type="binary" size="5" trackNum="2" timecode="0" presentationTimecode="385" flags="80"/>
<SimpleBlock type="binary" size="4968" trackNum="1" timecode="13" presentationTimecode="398" flags="80"/>
...
</Cluster>
After digging into the WebM format specification, compiling the libwebm tools, and studying DASH, I finally figured out how to make MSE appendBuffer work in any order!
ffmpeg -i result.webm -g 10 -c:v libvpx resultClusters.webm (you can also use libvpx-vp9)
mkvmuxer_sample -i resultClusters.webm -o resultRepaired.webm
mse_json_manifest resultRepaired.webm >> manifest.json
You will get on stdout something like:
{
"type": "video/webm; codecs=\"vp8\"",
"duration": 27771.000000,
"init": { "offset": 0, "size": 258},
"media": [
{ "offset": 258, "size": 54761, "timecode": 0.000000 },
{ "offset": 55019, "size": 166431, "timecode": 2.048000 },
{ "offset": 221450, "size": 49258, "timecode": 4.130000 },
{ "offset": 270708, "size": 29677, "timecode": 6.148000 },
{ "offset": 300385, "size": 219929, "timecode": 8.232000 },
{ "offset": 520314, "size": 25132, "timecode": 10.335000 },
{ "offset": 545446, "size": 180777, "timecode": 12.440000 },
{ "offset": 726223, "size": 76107, "timecode": 14.471000 },
{ "offset": 802330, "size": 376557, "timecode": 14.794000 },
{ "offset": 1178887, "size": 247138, "timecode": 16.877000 },
{ "offset": 1426025, "size": 78468, "timecode": 18.915000 },
{ "offset": 1504493, "size": 25614, "timecode": 20.991000 },
{ "offset": 1530107, "size": 368277, "timecode": 23.093000 },
{ "offset": 1898384, "size": 382847, "timecode": 25.097000 },
{ "offset": 2281231, "size": 10808, "timecode": 27.135000 }
]
}
Now all you have to do is first load the init metadata (xhr.setRequestHeader("Range", "bytes=0-257");) and then, in ANY ORDER, all the other segments. E.g. the second media segment's range is bytes 55019-221449.
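To illustrate where such byte ranges come from, here is a small Python sketch (assuming manifest.json contains exactly the object printed by mse_json_manifest above) that turns each manifest entry into the value of an HTTP Range header:

import json

# Assumes manifest.json holds the object printed by mse_json_manifest above.
with open("manifest.json") as f:
    manifest = json.load(f)

def byte_range(entry):
    # HTTP Range headers are inclusive, hence the -1.
    start = entry["offset"]
    return "bytes={}-{}".format(start, start + entry["size"] - 1)

print("init:", byte_range(manifest["init"]))   # bytes=0-257
for media in manifest["media"]:
    print("timecode {:>9.3f}s:".format(media["timecode"]), byte_range(media))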
Explanation:
The most important thing is the ffmpeg re-encoding with the group-of-frames size set to the cluster size you would like to have. In this example I chose a pretty low threshold (every 10 frames), but you can choose a higher one so that fewer clusters are generated (fewer items in the "media" array).
After that you have to fix the cues in the classic way (using sample_muxer from libwebm) and you are ready to go.
Tested on: Chrome 51, Firefox 47.

Yodlee getSiteLoginForm API response changes between attempts

There seems to be an inconsistency with the responses for Yodlee's getSiteLoginForm REST API function.
For a site that has a login field with radio buttons, sometimes the data coming back from Yodlee for that particular field will look like this:
{
"fieldInfoList": [
{
"validValues": [
"1",
"2",
"3",
"4"
],
"displayValidValues": [
"1",
"2",
"3",
"4"
],
"valueIdentifier": "OPTIONS",
"valueMask": "LOGIN_FIELD",
"fieldType": {
"typeName": "OPTIONS"
},
"size": 20,
"maxlength": 40,
"name": "OPTIONS",
"displayName": "Issue Number",
"isEditable": true,
"isOptional": false,
"isEscaped": false,
"helpText": "76367",
"isOptionalMFA": false,
"isMFA": false
}
]
}
and other times it looks like this:
{
"validValues": [
"1",
"2",
"3",
"4"
],
"displayValidValues": [
"1",
"2",
"3",
"4"
],
"valueIdentifier": "OPTION",
"valueMask": "LOGIN_FIELD",
"fieldType": {
"typeName": "OPTIONS"
},
"size": 20,
"maxlength": 40,
"name": "OPTION",
"displayName": "Issue Number",
"isEditable": true,
"isOptional": false,
"isEscaped": false,
"helpText": "76367",
"isOptionalMFA": false,
"isMFA": false
}
It's the same field but the valueIdentifier value has changed and the data isn't being enclosed in a fieldInfoList variable.
What would be the reason for this response data set changing between two attempts if there is no difference in the code?
In addition, could a similar response inconsistency be affecting other Yodlee API functions, and if so, how does one deal with this variance?
We did an analysis, and Yodlee provides the same response every time, no matter how many attempts you make. I am assuming that you might be confusing getSiteLoginForm and getLoginFormForContentService, as they are two different APIs belonging to the site-based and container-based approaches respectively. The response you mentioned first is returned when you use getSiteLoginForm, while the latter one comes from getLoginFormForContentService.
Hope this helps; there is no issue with the API, these are two different responses from two different APIs.
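If your client ends up consuming both APIs, a small defensive normalization step keeps the rest of the code shape-agnostic. This is only a sketch under the assumption that the two payloads differ exactly as shown above (one wraps the field objects in a fieldInfoList array, the other returns a field object directly):

def extract_login_fields(response):
    """Return a list of login-field dicts from either response shape."""
    # Site-based shape: fields are wrapped in a "fieldInfoList" array.
    if isinstance(response, dict) and "fieldInfoList" in response:
        return response["fieldInfoList"]
    # Container-based shape: a single field object is returned directly.
    if isinstance(response, dict):
        return [response]
    # Already a list of field objects.
    return list(response)

# Example with a trimmed version of the first payload from the question:
site_based = {
    "fieldInfoList": [
        {
            "valueIdentifier": "OPTIONS",
            "displayName": "Issue Number",
            "fieldType": {"typeName": "OPTIONS"},
        }
    ]
}
for field in extract_login_fields(site_based):
    print(field["displayName"], field["fieldType"]["typeName"])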

BigCommerce API - What is correct resource for updating an option value

I'm trying to update an option value using the BigCommerce API.
The documentation says PUT /options/values/id.json
The console says PUT options/id/values.json
I think it should be PUT options/id/values/id.json, which returns a 200 response code but does not execute the update.
Does anyone have information on what the right endpoint is and whether it works?
Basically, if you do a GET request on options
{
"id": 3,
"name": "Colors",
"display_name": "Color",
"type": "CS",
"values": {
"url": "https://store-xxx.mybigcommerce.com/api/v2/options/3/values.json",
"resource": "/options/3/values"
}
}
The resource endpoint shows that the URL is options/id/values.json. But this gives you all the values associated with the option. If you want to retrieve a specific option value, the endpoint is something like /api/v2/options/3/values/7.json:
{
"id": 7,
"option_id": 3,
"label": "Silver",
"sort_order": 1,
"value": "#cccccc"
}
Doing a PUT request on this endpoint (in a REST console, setting the Content-Type header to application/json and sending raw JSON data) updates the label - Silver changed to silver:
{
"id": 7,
"option_id": 3,
"label": "silver",
"sort_order": 1,
"value": "#cccccc"
}
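For reference, the same GET/PUT round trip as a small Python sketch. The store URL, credentials, and IDs are placeholders, and it assumes the legacy v2 API with basic auth as used in the URLs above:

import requests

# Placeholder store URL and credentials for the legacy v2 API (basic auth).
BASE = "https://store-xxx.mybigcommerce.com/api/v2"
AUTH = ("api_user", "api_token")

option_id, value_id = 3, 7
url = "{}/options/{}/values/{}.json".format(BASE, option_id, value_id)

# Fetch the specific option value.
current = requests.get(url, auth=AUTH)
current.raise_for_status()
print(current.json())

# Update just the label, as in the example above (Silver -> silver).
resp = requests.put(
    url,
    json={"label": "silver"},
    headers={"Content-Type": "application/json"},
    auth=AUTH,
)
resp.raise_for_status()
print(resp.json())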