Bitcoin's sendmany API will add a change address if there is no combination of UTXOs that matches the amount plus fee.
How do I set the change address?
What is the default change address?
Docs for reference:
sendmany "" {"address":amount} ( minconf "comment" ["address",...] replaceable conf_target "estimate_mode" )
Send multiple times. Amounts are double-precision floating point numbers.
Arguments:
1. dummy (string, required) Must be set to "" for backwards compatibility.
2. amounts (json object, required) A json object with addresses and amounts
{
"address": amount, (numeric or string, required) The bitcoin address is the key, the numeric amount (can be string) in BTC is the value
}
3. minconf (numeric, optional, default=1) Only use the balance confirmed at least this many times.
4. comment (string, optional) A comment
5. subtractfeefrom (json array, optional) A json array with addresses.
The fee will be equally deducted from the amount of each selected address.
Those recipients will receive less bitcoins than you enter in their corresponding amount field.
If no addresses are specified here, the sender pays the fee.
[
"address", (string) Subtract fee from this address
...
]
6. replaceable (boolean, optional, default=fallback to wallet's default) Allow this transaction to be replaced by a transaction with higher fees via BIP 125
7. conf_target (numeric, optional, default=fallback to wallet's default) Confirmation target (in blocks)
8. estimate_mode (string, optional, default=UNSET) The fee estimate mode, must be one of:
"UNSET"
"ECONOMICAL"
"CONSERVATIVE"
Result:
"txid" (string) The transaction id for the send. Only 1 transaction is created regardless of
the number of addresses.
Technically you always pay the fee; by default, Bitcoin Core commands generate a new change address automatically for each transaction. If you want to specify a change address, you should use createrawtransaction instead.
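As a sketch of the createrawtransaction route: you select the inputs yourself, so you must also compute the change and add it as an explicit output. The addresses and amounts below are placeholders, and the outputs mapping is just the JSON object that createrawtransaction expects, built in Python for illustration:

```python
# Sketch: choosing your own change output when building a raw transaction.
# Amounts are illustrative; addresses are placeholders.
from decimal import Decimal

inputs_total = Decimal("0.5")     # sum of the UTXOs you selected
send_amount = Decimal("0.3")      # what the recipient should get
fee = Decimal("0.0002")           # fee you decided on yourself

# With raw transactions, anything you don't send back is the fee,
# so the change output must be computed and added explicitly.
change = inputs_total - send_amount - fee

outputs = {
    "1RecipientAddressPlaceholderxxxxxx": str(send_amount),
    "1YourChangeAddressPlaceholderxxxxx": str(change),  # explicit change output
}
print(outputs["1YourChangeAddressPlaceholderxxxxx"])  # 0.1998
```

This outputs object, together with your selected inputs, would then go to createrawtransaction, followed by signing and broadcasting.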
For example, looking at RFC 7301, which defines ALPN:
enum {
application_layer_protocol_negotiation(16), (65535)
} ExtensionType;
The (16) is the enum value to be used, but how should I read the (65535) part?
From the same document:
opaque ProtocolName<1..2^8-1>;
struct {
ProtocolName protocol_name_list<2..2^16-1>
} ProtocolNameList;
...how should I read the <1..2^8-1> and <2..2^16-1> parts?
The notation is described in https://www.rfc-editor.org/rfc/rfc8446.
For "enumerateds" (enums), see https://www.rfc-editor.org/rfc/rfc8446#section-3.5, which says that the value in parentheses is the value of that enum member, and that the enum occupies as many octets as required by the highest documented value.
Thus, if you want to leave some room, you need an unnamed enum member with a sufficiently high value.
One may optionally specify a value without its associated tag to force the width definition without defining a superfluous element.
In the following example, Taste will consume two bytes in the data stream but can only assume the values 1, 2, or 4.
enum { sweet(1), sour(2), bitter(4), (32000) } Taste;
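The width rule can be checked with a few lines of Python (a sketch, not RFC tooling): the enum occupies as many octets as are needed to represent the highest value that appears in the definition.

```python
# Octets occupied by a TLS "enumerated": enough bytes to hold the
# highest value listed in the definition (RFC 8446, section 3.5).
def enum_width(max_value: int) -> int:
    width = 1
    while max_value >= 256 ** width:
        width += 1
    return width

print(enum_width(4))       # 1: enum { sweet(1), sour(2), bitter(4) } alone
print(enum_width(32000))   # 2: adding (32000) forces a two-byte encoding
print(enum_width(65535))   # 2: ExtensionType with (65535) is two bytes
```

This is why the unnamed (32000) member makes Taste consume two bytes even though only 1, 2, and 4 are legal values.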
For vectors, see https://www.rfc-editor.org/rfc/rfc8446#section-3.4. This says:
Variable-length vectors are defined by specifying a subrange of legal lengths, inclusively, using the notation <floor..ceiling>. When these are encoded, the actual length precedes the vector's contents in the byte stream. The length will be in the form of a number consuming as many bytes as required to hold the vector's specified maximum (ceiling) length.
So the notation <1..2^8-1> means that ProtocolName must be at least one octet, and up to 255 octets in length.
Similarly <2..2^16-1> means that protocol_name_list must have at least 2 octets (not entries), and can have up to 65535 octets (not entries).
In this particular case, the minimum of 2 octets is because it must contain at least one entry, which is itself at least 2 octets long (u8 length prefix, at least one octet in the value).
To make the octets/entries distinction clear, later in that section, it says:
uint16 longer<0..800>;
/* zero to 400 16-bit unsigned integers */
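The length-prefix rule can be illustrated by encoding an ALPN ProtocolNameList in Python (a sketch; the byte layout follows RFC 8446 section 3.4, and "h2" and "http/1.1" are registered ALPN protocol IDs):

```python
# Encode an ALPN ProtocolNameList: each ProtocolName<1..2^8-1> gets a
# one-byte length prefix; the outer list<2..2^16-1> gets a two-byte prefix,
# because two bytes are needed to hold the ceiling length 65535.
def encode_protocol_name_list(names):
    body = b"".join(
        len(n).to_bytes(1, "big") + n for n in (s.encode("ascii") for s in names)
    )
    return len(body).to_bytes(2, "big") + body

encoded = encode_protocol_name_list(["h2", "http/1.1"])
print(encoded.hex())  # 000c02683208687474702f312e31
# 000c = 12-octet body; 02 "h2"; 08 "http/1.1"
```

Note that both prefixes count octets, not entries, which is exactly the octets/entries distinction made above.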
In my JS file I call sendTransaction on the smart contract. What is the difference between these two calls?

instance.multiply.sendTransaction(val, { from: accounts[0], gas: 300000 })

instance.multiply.sendTransaction({ from: accounts[0], gas: 30000, value: val })

I am passing the first one to the function as an argument, and the second is accessible in the function just via msg.value?
In your first code snippet, you're passing val as an argument to a function.
In the second code snippet, you're not passing any arguments, but you're sending val wei with the transaction. Yes, the contract can see how much wei was sent by looking at msg.value, but importantly, there was also a transfer of ether. (10**18 wei == 1 ether.)
So the key differences between the two are:
One passes a value as an argument, and the other doesn't.
One sends some ether with the transaction, and the other doesn't.
Proper Syntax for web3.eth.sendTransaction
web3.eth.sendTransaction(transactionObject [, callback])
The second one, instance.multiply.sendTransaction({ from: accounts[0], gas: 30000, value: val }), works fine, as it should.
The general format of sendTransaction is sendTransaction({from: eth.accounts[0], data: code, gas: 100000}).
from: String - The address of the sending account. Uses the web3.eth.defaultAccount property if not specified.
to: String - (optional) The destination address of the message; left undefined for a contract-creation transaction.
value: Number|String|BigNumber - (optional) The value transferred for the transaction in wei; also the endowment if it's a contract-creation transaction.
gas: Number|String|BigNumber - (optional, default: to-be-determined) The amount of gas to use for the transaction (unused gas is refunded).
data: String - (optional) Either a byte string containing the associated data of the message or, in the case of a contract-creation transaction, the initialisation code.
For more See: https://github.com/ethereum/wiki/wiki/JavaScript-API#web3ethsendtransaction
I'm trying to implement CiA 401 (I/O), but I don't know how the device should behave if object 6002 (input polarity) changes.
Should the value in object 6000 (read input) also change, and if so, should a PDO also be sent, even though nothing has changed at the physical input?
The only mandatory input polarity objects are 6002:0 and 6002:1, and they should affect the polarity of the corresponding digital on/off objects mapped at 6000. Note that DS-401 lists an "Entry Category" which dictates which objects and sub-indices are mandatory and which are optional.
If you map the input polarity, it will be an RPDO in your application, and it will affect whatever TPDO the corresponding inputs are mapped into. As far as I remember, the values inside 6000 should not change, only the values of the relevant TPDO. This TPDO will only be sent when it should, depending on how it is configured: cyclic, on change, on request, etc.
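A minimal sketch of this reading in Python (function and variable names are hypothetical, not part of any CiA 401 API): 6000h keeps the physical reading, and polarity is applied only when composing the transmitted value.

```python
# Sketch: applying 6002h (polarity) to the raw inputs (6000h) when
# composing the TPDO payload. 6000h itself keeps the physical reading;
# only the transmitted value is inverted where a polarity bit is set.
def tpdo_input_byte(raw_6000: int, polarity_6002: int) -> int:
    return (raw_6000 ^ polarity_6002) & 0xFF

raw = 0b0000_1010       # physical inputs as stored in 6000h
polarity = 0b0000_0010  # invert input 1 only
print(bin(tpdo_input_byte(raw, polarity)))  # 0b1000
```

Under this reading, a write to 6002 changes the TPDO data (and may trigger an on-change transmission) even though the physical inputs, and hence 6000, are unchanged.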
We have code to interrogate the values from various EMV TLVs.
However, in the case of PED serial number, the spec for tag "9F1E" at
http://www.emvlab.org/emvtags/
has:

Name: Interface Device (IFD) Serial Number
Description: Unique and permanent serial number assigned to the IFD by the manufacturer
Source: Terminal
Format: an 8
Tag: 9F1E
Length: 8
P/C: primitive
But the above gives a length limit of 8, while we have VeriFone PEDs with 9-character serial numbers,
so sample code relying on tag "9F1E" cannot retrieve the full length.
int GetPPSerialNumber()
{
    char resultCharArray[64];  /* buffer for the tag's value */
    int rc = GetTLV("9F1E", resultCharArray);
    return rc;
}
In the above, GetTLV() is written to take a tag argument and populate the value into a char array.
Have any developers found a nice way to retrieve the full 9?
You're correct -- there is a mismatch here. The good thing about TLV is that you don't really need a specification to tell you how long the value is going to be. Your GetTLV() is imposing this restriction itself; the obvious solution is to relax it.
We actually don't even look at the documented lengths on the TLV-parsing level. Each tag is mapped to an associated entity in the BL (sometimes more than one thanks to the schemes going their own routes for contactless), and we get to choose which entities we want to impose a length restriction on there.
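A minimal TLV read that trusts the encoded length instead of the spec table, sketched in Python (real EMV BER-TLV also has multi-byte tags beyond two octets and multi-byte long-form lengths; only the cases needed here are handled):

```python
# Minimal BER-TLV parse returning the value for a given tag, trusting
# the encoded length byte(s) rather than a spec-imposed limit.
def find_tlv(data, wanted_tag):
    i = 0
    while i < len(data):
        # Tag: one byte, or two if the low 5 bits of the first are all set.
        tag_len = 2 if data[i] & 0x1F == 0x1F else 1
        tag = data[i:i + tag_len]
        i += tag_len
        # Length: short form, or long form (0x81 => one length byte follows).
        length = data[i]
        i += 1
        if length == 0x81:
            length = data[i]
            i += 1
        value = data[i:i + length]
        i += length
        if tag == wanted_tag:
            return value
    return None

# A 9-byte serial number under tag 9F1E parses fine: the length 09
# comes from the data itself, not from the documented "an 8" format.
record = bytes.fromhex("9f1e09") + b"123456789"
print(find_tlv(record, bytes.fromhex("9f1e")))  # b'123456789'
```

The length-check against the documented format, if you want one at all, then belongs in the business layer per tag, not in the parser.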
We have implemented ISO 8583. Every transaction works except keyed transactions for AMEX cards.
Document states :
The Track 2 Data and Primary Account Number (PAN) fields are instances of numeric data elements that follow a different format: In the case where the variable length data has an odd number of digits, set the right-most half byte to X '0'.
But when we pad the AMEX card number with a 0, we get an INVALID CARD NUMBER response,
and if we send the 15-digit card number unpadded, no response at all is received.
Also, at another place in the document it is mentioned:
Bitmap 2 — Primary Account Number
Field Name / Description: Variable, up to 19 digits (if needed, last ½ byte padded with binary zero), preceded by a 1-byte length indicator.
Comments: This field identifies the card member's account number. Unlike most numeric fields, the Primary Account Number is left-justified. In this case, the rightmost byte is padded with a ½ byte binary zero (e.g., a three-position field, X '03 12 30').
Is there anything special that we need to do for odd-digit card numbers?
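The padding rule quoted above can be sketched in Python (illustrative only; the 1-byte binary length indicator carrying the digit count matches the X '03 12 30' example, but length-indicator conventions vary between ISO 8583 specs):

```python
# Pack a PAN as BCD, left-justified: an odd number of digits gets its
# right-most half byte set to 0, per the quoted AMEX formatting rule.
def pack_pan(pan: str) -> bytes:
    digits = pan if len(pan) % 2 == 0 else pan + "0"  # pad last nibble
    length_indicator = bytes([len(pan)])              # actual digit count
    return length_indicator + bytes.fromhex(digits)

packed = pack_pan("123")  # the document's three-position example
print(packed.hex())       # 031230 -> X'03 12 30'
```

Note that the length indicator still carries the true digit count (3, or 15 for an AMEX PAN); only the BCD body is padded. Sending 16 as the length after padding would be one way to trigger an INVALID CARD NUMBER response.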