Getting "66: insufficient priority" when trying to push a transaction - bitcoin

Basically I am getting 66: insufficient priority. Code: -26 when I try to push a transaction.
I have already tried to increase the fee manually.
AMOUNT_TOSEND is in satoshis.
I take the total_received value from https://api.blockcypher.com/v1/btc/test3/addrs/ + address + '/full?limit=50' and multiply it by 100000000.
I take the fee from here: https://test-insight.bitpay.com/api/utils/estimatefee and multiply it by 100000000.
Here is how I calculate outputs:
tx.addOutput(ADDRESS_BENEFICIARY, Number(AMOUNT_TOSEND)) // payment output
tx.addOutput(ADDRESS_BENEFICIARY, Number(balance) - (Number(AMOUNT_TOSEND) + Number(fee))) // change output (sent back to the same address here)
Pasting the raw transaction at https://test-insight.bitpay.com/tx/send gives 66: insufficient priority. Code: -26.

I was able to fix it after I started to use the transaction amount instead of the total balance, and also multiplied the fee by the transaction size in bytes, though I am not sure whether that last step is strictly required.
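For reference, below is a minimal sketch (my own, not code from the original post) of that size-based fee arithmetic. It assumes the insight estimatefee endpoint returns a rate in BTC per kilobyte; the transaction size and amounts are made-up example values:

const SATOSHIS_PER_BTC = 100000000;

// feeRateBtcPerKb: the value returned by /api/utils/estimatefee (BTC per kB)
// txSizeBytes: the serialized size of the transaction in bytes
function calculateFee(feeRateBtcPerKb, txSizeBytes) {
  const satoshisPerByte = (feeRateBtcPerKb * SATOSHIS_PER_BTC) / 1000;
  return Math.ceil(satoshisPerByte * txSizeBytes);
}

// Hypothetical numbers: a 226-byte one-input/two-output transaction
// at an estimated rate of 0.00011 BTC/kB.
const fee = calculateFee(0.00011, 226);          // 2486 satoshis
const inputsTotal = 500000;                      // satoshis actually being spent
const amountToSend = 100000;
const change = inputsTotal - amountToSend - fee; // value of the change output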

solana spl-token transfer fee "Error: Program(IncorrectProgramId)"

I want to create my own Solana token that takes a 2% fee on all transactions, and the total supply should be 100k tokens. That's why I used the spl-token CLI for this: spl-token create-token --transfer-fee 50 1000. However, after executing this command I get an error like
Error: Program(IncorrectProgramId)
How can I fix this error, or how can I create my own token with a transfer fee?
When creating a new token with transfer fees, you must specify the program id as the token-2022 program id of TokenzQdBNbLqP5VEhdkAS6EPFLC1PHnBqCXEpPxuEb, so instead, do:
spl-token --program-id TokenzQdBNbLqP5VEhdkAS6EPFLC1PHnBqCXEpPxuEb create-token --transfer-fee 50 1000
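If you prefer to do the same thing programmatically, here is a minimal sketch (my own, not part of the original answer) using @solana/web3.js and @solana/spl-token; the keypairs, decimals and fee values are placeholder assumptions:

const {
  Connection, Keypair, SystemProgram, Transaction,
  sendAndConfirmTransaction, clusterApiUrl,
} = require('@solana/web3.js');
const {
  TOKEN_2022_PROGRAM_ID, ExtensionType, getMintLen,
  createInitializeTransferFeeConfigInstruction,
  createInitializeMintInstruction,
} = require('@solana/spl-token');

(async () => {
  const connection = new Connection(clusterApiUrl('devnet'), 'confirmed');
  const payer = Keypair.generate(); // placeholder: fund this account first
  const mint = Keypair.generate();

  // A mint that carries the TransferFeeConfig extension needs extra space.
  const mintLen = getMintLen([ExtensionType.TransferFeeConfig]);
  const lamports = await connection.getMinimumBalanceForRentExemption(mintLen);

  const tx = new Transaction().add(
    SystemProgram.createAccount({
      fromPubkey: payer.publicKey,
      newAccountPubkey: mint.publicKey,
      space: mintLen,
      lamports,
      programId: TOKEN_2022_PROGRAM_ID, // token-2022, not the legacy token program
    }),
    // 50 basis points = 0.5% per transfer, capped at 1,000 base units
    createInitializeTransferFeeConfigInstruction(
      mint.publicKey, payer.publicKey, payer.publicKey,
      50, 1000n, TOKEN_2022_PROGRAM_ID,
    ),
    createInitializeMintInstruction(
      mint.publicKey, 9, payer.publicKey, null, TOKEN_2022_PROGRAM_ID,
    ),
  );
  await sendAndConfirmTransaction(connection, tx, [payer, mint]);
})();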

how to get the average response time for a request using the JMeter testing tool

I am trying JMeter for load and performance testing, so I created a thread group; below is the output of the aggregate report.
The first column, Avg request t/s, I calculated using the formula ((Average/Total Requests)/1000), but it does not seem right: I am logging request times in my code, and almost every request takes at least 2-4 seconds.
I tried Median/1000, but again I am in doubt.
What is the correct way to get the average time for a request?
| Avg request t/s | Total Requests | Average (ms) | Median (ms) | Min (ms) | Max (ms) | Error % | Throughput (requests per time unit) | Received KB/sec | Sent KB/sec |
|---|---|---|---|---|---|---|---|---|---|
| 0.07454 | 100 | 7454 | 6663 | 2464 | 19313 | 0 | 3/sec | 2.062251152 | 1.074506499 |
| 1.11322 | 100 | 111322 | 107240 | 4400 | 222042 | 0 | 26.3/min | 0.1408915377 | 0.1271878015 |
| 1.19035 | 100 | 119035 | 117718 | | | 0.03 | 26.3/min | 0.1309013211 | 0.1279624502 |
| 1.21287 | 100 | 121287 | 119198 | | | 0 | 0.4136384882 | 0.135725129 | 0.1211831508 |
| 1.11943 | 100 | 111943 | 111582 | 5257 | 220004 | 0 | 0.4359482965 | 0.1507086884 | 0.1264420352 |
| 1.14289 | 100 | 114289 | 114215 | 4543 | 223947 | 0 | 0.4369846313 | 0.1497867242 | 0.1288763268 |
| 0.23614 | 150 | 35421 | 26731 | 4759 | 114162 | 0 | 0.9494271789 | 0.3600282257 | 0.1842358496 |
Don't you have the Throughput (requests per time unit) column already? What else do you need to "calculate"?
As per the Aggregate Report documentation:
Throughput - the Throughput is measured in requests per second/minute/hour. The time unit is chosen so that the displayed rate is at least 1.0. When the throughput is saved to a CSV file, it is expressed in requests/second, i.e. 30.0 requests/minute is saved as 0.5.
So if you click the Save Table Data button, you will get the average transactions per second in the CSV file.
It is also possible to generate a CSV file with the calculated aggregate values using the JMeter Plugins Command Line Tool:
JMeterPluginsCMD.bat --generate-csv aggregate-report.csv --input-jtl /path/to/your/results.jtl --plugin-type AggregateReport
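If you want to reproduce the same arithmetic outside JMeter, below is a minimal Node.js sketch (my own, not part of the original answer). It assumes a CSV-format results file with JMeter's default timeStamp (epoch millis) and elapsed (response time in millis) columns, and that no field contains an embedded comma:

const fs = require('fs');

const lines = fs.readFileSync('results.jtl', 'utf8').trim().split('\n');
const header = lines[0].split(',');
const tsCol = header.indexOf('timeStamp'); // epoch millis when the sample started
const elCol = header.indexOf('elapsed');   // response time in millis

// Naive CSV split; fine as long as no field contains a comma.
const samples = lines.slice(1).map(l => l.split(','));
const elapsed = samples.map(s => Number(s[elCol]));

// Average response time per request, in seconds: mean of "elapsed" / 1000.
// (This is what the report's "Average" column shows, in milliseconds.)
const avgSec = elapsed.reduce((a, b) => a + b, 0) / elapsed.length / 1000;

// Throughput in requests/second: sample count divided by the wall-clock
// span from the first sample's start to the last sample's end.
const start = Math.min(...samples.map(s => Number(s[tsCol])));
const end = Math.max(...samples.map(s => Number(s[tsCol]) + Number(s[elCol])));
const throughput = samples.length / ((end - start) / 1000);

console.log(`average response time: ${avgSec.toFixed(3)} s`);
console.log(`throughput: ${throughput.toFixed(3)} req/s`);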

What does the ‘ovs-dpctl show’ command mean?

When I execute the 'ovs-dpctl show' command, I get:
$ ovs-dpctl show
system@ovs-system:
lookups: hit:37994604 missed:218759 lost:0
flows: 5
masks: hit:39862430 total:5 hit/pkt:1.04
port 0: ovs-system (internal)
port 1: vbr0 (internal)
port 2: gre_sys (gre)
port 3: net2
I found this explanation in the man page:
[-s | --statistics] show [dp...]
Prints a summary of configured datapaths, including their datapath numbers and a list of ports connected to each datapath. (The local port is identified as port 0.) If -s or --statistics is specified, then packet and byte counters are also printed for each port.
The datapath numbers consists of flow stats and mega flow mask stats.
The "lookups" row displays three stats related to flow lookup triggered by processing incoming packets in the datapath. "hit" displays number of packets matches existing flows. "missed" displays the number of packets not matching any existing flow and require user space processing. "lost" displays number of packets destined for user space process but subsequently dropped before reaching userspace. The sum of "hit" and "miss" equals to the total number of packets datapath processed.
The "flows" row displays the number of flows in datapath.
The "masks" row displays the mega flow mask stats. This row is omitted for datapath not implementing mega flow. "hit" displays the total number of masks visited for matching incoming packets. "total" displays number of masks in the datapath. "hit/pkt" displays the average number of masks visited per packet; the ratio between "hit" and total number of packets processed by the datapath.
If one or more datapaths are specified, information on only those datapaths are displayed. Otherwise, ovs-dpctl displays information about all configured datapaths.
My questions are:
1. Is the total number of incoming packets equal to (lookups.hit + lookups.missed)?
2. If it is, why is the value of masks.hit (39862430) greater than lookups.hit (37994604) + lookups.missed (218759)?
3. Why is the masks.hit/pkt ratio greater than 1, and what range of values is reasonable?
Is the total number of incoming packets equal to (lookups.hit + lookups.missed)?

Yes. (Plus lookups.lost, except that I see that's zero for you.)

If it is, why is the value of masks.hit (39862430) greater than lookups.hit (37994604) + lookups.missed (218759)?

masks.hit is the number of hash table lookups that were executed to process all of the packets that were processed. A given packet might require up to masks.total lookups.

Why is the masks.hit/pkt ratio greater than 1, and what range of values is reasonable?

The ratio cannot be less than 1.00, because that would mean that processing a packet didn't require even a single lookup. A ratio of 1.04 is very good, because it means that most packets were processed with only a single lookup. Higher ratios are worse.
by Ben Pfaff (blp@ovn.org)
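As a quick sanity check (my own arithmetic, not part of the answer above), the reported hit/pkt ratio can be reproduced from the numbers in the question:

// Numbers taken from the ovs-dpctl output in the question.
const lookups = { hit: 37994604, missed: 218759, lost: 0 };
const masks = { hit: 39862430, total: 5 };

// Total packets processed by the datapath:
const packets = lookups.hit + lookups.missed + lookups.lost; // 38213363

// Average mask (hash table) lookups per packet:
const hitPerPkt = masks.hit / packets;
console.log(hitPerPkt.toFixed(2)); // "1.04", matching the reported hit/pkt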

sagepay / authenticate and authorise / 115% calculation rules

I've just had an issue where an authenticated payment failed to authorise with the following error:
StatusDetail=4044 : This Authorise would exceed maximum allowed value.
The original amount was 119.46. If you multiply that by 1.15 to add on 15%, you get 137.379, which I would expect to round up to 137.38. I can only authorise the payment if I authorise the rounded-down value of 137.37.
Could someone please confirm the calculation method used when determining what is available to authorise?
I suspect that, as the Authorise can be up to 15% over the transaction value, it must not exceed 15% even by a fraction, so the amount is rounded down rather than up.
119.46 x 0.15 = 17.919; 119.46 + 17.919 = 137.379, which is truncated to 137.37. The Amount to Authorise will be 137.37, which is why you are being presented with 4044 for the amount of 137.38.
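A minimal sketch of that suspected rule (an assumption on my part, not confirmed SagePay behaviour): truncate, rather than round, the 115% ceiling to two decimal places:

// Truncate (round down) to 2 decimal places so the ceiling never exceeds
// 115% of the original transaction value, even by a fraction of a penny.
function maxAuthoriseAmount(original) {
  return Math.floor(original * 1.15 * 100) / 100;
}

console.log(maxAuthoriseAmount(119.46)); // 137.37, not the rounded-up 137.38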

eWAM - In Wynsure - Invalid time format error in aWFOperationAssignment object

When processes like GBP Subscription/Member Enrollment/Member Endorsement are performed and then accepted, the system throws an error:
“Object of the class type aWFOperationAssignment cannot be stored in
the database with the corresponding NSID, ID & Version”
and the transaction is rolled back, with the below error shown in the error report:
“The transaction is roll backed. Err Code= 22007.
ErrMsg=SQLState=22007 . [Microsoft][SQL Server Native Client
10.0]Invalid time format”.
This happens only in a few of the environments. I am not sure if this is a code or configuration issue.
This issue is caused by the “Bank Holidays Context” configuration in Wynsure.
In the Bank Holidays settings (Business Administration -> General Settings -> Bank Holiday), the End Time is supposed to be configured in 24-hour format. If it is configured as, for example, 8 for the start time and 5 for the end time, instead of 8 for the start and 17 for the end, the duration is calculated incorrectly: Wynsure subtracts the start time from the end time, so it subtracts 8 from 5 and gets a negative, incorrect duration.
This configuration causes an issue when processing any transaction, because on completion of a transaction a corresponding operation is created with two fields, “Expected Limit Date” and “Expected Limit Time”, and these fields use the difference between the “End Time” and “Start Time” to calculate the expected date and time limit.
As the difference between the End Time and Start Time returns an incorrect value, an invalid date and time is calculated; the system then throws the invalid time format error and the transaction is rolled back.
To fix this issue, configure the “End Time” in 24-hour format.
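A small sketch (illustrative only, not Wynsure code) of why a 12-hour End Time breaks the duration arithmetic described above:

// Wynsure-style duration: end time minus start time, both in hours.
function workingHours(startTime, endTime) {
  return endTime - startTime;
}

console.log(workingHours(8, 17)); //  9 -> correct with 24-hour format
console.log(workingHours(8, 5));  // -3 -> negative duration, which later
                                  //       produces the invalid time value
                                  //       behind SQL error 22007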