What is the difference between debot fetch and debot start?

I have a few different options in the command line: fetch and start. What is the difference between them?

Here https://github.com/tonlabs/tonos-cli/pull/162 the Free TON guys say that they
"added debot start command as synonym for fetch command."
BUT here https://github.com/tonlabs/TON-SDK/blob/master/docs/mod_debot.md#start they say:
start - Starts the DeBot.
Downloads debot smart contract from blockchain and switches it to context zero.
This function must be used by Debot Browser to start a dialog with debot. While the function is executing, several Browser Callbacks can be called, since the debot tries to display all actions from the context 0 to the user.
When the debot starts, the SDK registers the BrowserCallbacks AppObject. Therefore, when debot.remove is called, the debot is deleted and the callback is called with finish=true, which indicates that it will never be used again.
type ParamsOfStart = {
    debot_handle: DebotHandle
}

function start(
    params: ParamsOfStart,
): Promise<void>;
and below https://github.com/tonlabs/TON-SDK/blob/master/docs/mod_debot.md#fetch
fetch - Fetches DeBot metadata from blockchain.
Downloads DeBot from blockchain and creates and fetches its metadata.
type ParamsOfFetch = {
    address: string
}

type ResultOfFetch = {
    info: DebotInfo
}

function fetch(
    params: ParamsOfFetch,
): Promise<ResultOfFetch>;
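For reference, here is a rough sketch of how I would expect a DeBot browser to combine the two calls, based only on the signatures quoted above (the client object and the way the debot_handle is obtained, e.g. via the module's init call, are my assumptions, not something the quoted docs spell out):

// Sketch only: `client` is assumed to be an initialized TON SDK client that
// exposes the `debot` module; `debotHandle` is assumed to have been obtained
// elsewhere (e.g. via the module's init call).
async function openDebot(client: any, address: string, debotHandle: number): Promise<void> {
    // fetch: read-only, just downloads the DeBot metadata for the given address
    const { info } = await client.debot.fetch({ address });
    console.log("Fetched DeBot metadata:", info);

    // start: switches the already-initialized DeBot to context zero and
    // drives the Browser Callbacks; resolves with no value
    await client.debot.start({ debot_handle: debotHandle });
}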
Here in the Telegram chat https://t.me/freeton_dev_exp you can find the SDK developers.

Related

Issues during implementation of custom ETH settler

I have investigated and played with the corda-settler project. Following the recommendations in the documentation, I have created a custom Ethereum module (with an outline similar to the Ripple module), providing the option to settle obligations using off-ledger payments in ETH. The implementation (https://github.com/vladichhh/corda-settler)
consists of the following significant pieces:
flows
    MakeEthPayment
services
    ETHClient
    ETHService
types
    EthPayment
    EthSettlement
token
    registered DigitalCurrency for ETH
oracle
    added logic for ETH payment verification
MakeEthPayment.kt
@Suspendable
override fun makePayment(obligation: Obligation<*>, amount: Amount<T>): EthPayment<T> {
    // get ETHService client
    val ethClient = serviceHub.cordaService(ETHService::class.java).client

    val recipient = obligation.settlementMethod?.accountToPay.toString()
    val amountToSend = amount.quantity.toString()

    // trigger ETH transfer
    val txHash = ethClient.sendEth(recipient, amountToSend)

    // return the payment
    return EthPayment(txHash, amount, PaymentStatus.SENT)
}
ETHClient.kt
fun sendEth(recipient: String, amount: String): String {
    val weiAmount: BigInteger = Convert.toWei(amount, Convert.Unit.GWEI).toBigInteger()
    val credentials: Credentials = WalletUtils.loadCredentials(walletPassword, walletFile)

    val transactionReceipt: TransactionReceipt = Transfer
        .sendFunds(web3j, credentials, recipient, BigDecimal(weiAmount), Convert.Unit.WEI)
        .send()

    return transactionReceipt.transactionHash
}
In order to send the required ETH amount to the specified recipient account, we have to do some Ethereum-specific work:
we connect to the public Ethereum blockchain using the "web3j" library
in order to trigger an Ethereum transaction and transfer the specified ETH amount, "web3j" requires access to a file containing the encrypted sender's wallet
thus we have to provide the password (to decrypt the wallet) and the location of the file containing the encrypted sender's wallet
And here are the issues:
I got an exception that the file could not be found, no matter where I put it. I even checked the "swift" implementation and tried to use the class loader to load my file, but without success.
I suppose the file with the encrypted sender's wallet should be located in one of the following places:
corda-settler/ethereum/src/main/resources/file.tmp
corda-settler/cordapp/src/main/resources/file.tmp
Finally, I hardcoded the location like this:
/Users/vladimirhristov/WebstormProjects/Corda/corda-settler/cordapp/src/main/resources/file.tmp
It seems the file was then found, but I got another exception:
java.lang.OutOfMemoryError (screenshot)
It seems that decrypting the wallet is highly resource-consuming, which perhaps breaks the flow. There is an option to reduce the complexity of the wallet-generation algorithm, which would mean fewer resources are needed to decrypt the same wallet later, but this would reduce security as well.
And here are my three basic questions:
How can I specify a location (or mechanism) so that the flow successfully finds the file containing the sender's encrypted wallet?
How can I access files from within the flow, or is there another mechanism to attach only the encrypted wallet file and delegate the decryption to core Corda?
Do I just need to increase the node's resources (tuning JVM params, increasing -Xms/-Xmx) in order to avoid the OutOfMemoryError?
Content of the file (containing encrypted sender’s wallet):
file.tmp
{"version":3,"id":"ecb51768-8564-498a-bb11-3a5a5c8dc0bb","address":"2bafc482bd227dfd5ba250521a00be3a4cc88bbd","crypto":{"ciphertext":"e0511415792dfa7221ba1b8f32b8ec98e1410f45e612e2100df1aceddfdb22bd","cipherparams":{"iv":"7ffa2af08f502c63d57e62440ad77539"},"cipher":"aes-128-ctr","kdf":"scrypt","kdfparams":{"dklen":32,"salt":"8051a5df1c02eb3eba81d2920fbb84b76b948a1248bbba62ffff684e733948cf","n":131072,"r":8,"p":1},"mac":"be23fe0e261ba38892581d80afd0c86563748377b5cc702b6ed3285a13cceff6"}}
I will appreciate any help! Thanks in advance :)
It's VERY strange that Corda is giving you an out-of-memory error when running that flow.
I'd actually say that we'd need to see the code for the flow in order to know how it could have run out of memory.
Are you running it in a container? Just make sure that you're meeting the requirements to run a JVM with an application on top.
tl;dr: use an 8 GB RAM machine to run your Corda node on the latest version of Corda; that should hopefully solve this issue.
Here's the docs page on the memory requirements:
https://docs.corda.net/docs/corda-enterprise/4.5/node/performance-results.html#sizing

How to test contract with multiple accounts / addresses in truffle?

I want to test my Truffle contract with multiple msg.sender addresses. Like "the first user sells a token, the second user buys this token". For one address I simply write something like contract.buy.value(10 wei)();. But where can I get another address, and how do I send money from it?
I write my tests in Solidity, not in JavaScript.
As you can see in the Truffle docs, you can specify two different accounts to interact with your deployed smart contract, like below (MetaCoin example):
var account_one = "0x1234..."; // an address
var account_two = "0xabcd..."; // another address

var meta;
MetaCoin.deployed().then(function(instance) {
  meta = instance;
  return meta.sendCoin(account_two, 10, {from: account_one});
}).then(function(result) {
  // If this callback is called, the transaction was successfully processed.
  alert("Transaction successful!");
}).catch(function(e) {
  // There was an error! Handle it.
});
That is how you can do it with your own token.
If you want to transfer Ether between accounts, you can specify the accounts in your Truffle test file (a JavaScript file). These accounts come from your configured local blockchain (Ganache, if you are using the Truffle Suite to test your smart contract, provides several accounts, and you can configure them yourself).
Moreover, you may need the JavaScript API to specify the sender and receiver: web3.eth.sendTransaction, as sketched below.
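To illustrate (a rough sketch with illustrative names, assuming the unlocked accounts that Ganache provides), a Truffle JavaScript test receives the available addresses through the accounts array and can either pass a from field to a contract call or send Ether directly:

// Rough sketch of a Truffle JS test; contract and variable names are illustrative.
const MetaCoin = artifacts.require("MetaCoin");

contract("MetaCoin", (accounts) => {
  const seller = accounts[0]; // first Ganache account
  const buyer = accounts[1];  // second Ganache account

  it("sends coins and Ether between two accounts", async () => {
    const meta = await MetaCoin.deployed();

    // call a contract method as the second account
    await meta.sendCoin(seller, 10, { from: buyer });

    // plain Ether transfer from buyer to seller
    await web3.eth.sendTransaction({
      from: buyer,
      to: seller,
      value: "1000000000000000000", // 1 ether, expressed in wei
    });
  });
});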
This is my first time answering a question; I hope it helps.

How to call a chaincode from another chaincode deployed on two different organization peers connected using a single channel in hyperledger-fabric?

I wrote chaincode1 (deployed on a peer of ORG1), which accepts details from the user and stores them in the ledger. Now I want to write chaincode2 (deployed on a peer of ORG2), which takes some data from chaincode1 for computation. chaincode2 should be called by chaincode1 with the specific details required for the computation.
How can I achieve this, and where should I test it?
To start with, there are a few preconditions:
If you'd like chaincode1 to invoke chaincode2, both chaincodes need to be installed on the same peer, and
You need to make sure this peer is part of both channels
Next, you need to leverage the following API:
// InvokeChaincode locally calls the specified chaincode `Invoke` using the
// same transaction context; that is, chaincode calling chaincode doesn't
// create a new transaction message.
// If the called chaincode is on the same channel, it simply adds the called
// chaincode read set and write set to the calling transaction.
// If the called chaincode is on a different channel,
// only the Response is returned to the calling chaincode; any PutState calls
// from the called chaincode will not have any effect on the ledger; that is,
// the called chaincode on a different channel will not have its read set
// and write set applied to the transaction. Only the calling chaincode's
// read set and write set will be applied to the transaction. Effectively
// the called chaincode on a different channel is a `Query`, which does not
// participate in state validation checks in subsequent commit phase.
// If `channel` is empty, the caller's channel is assumed.
InvokeChaincode(chaincodeName string, args [][]byte, channel string) pb.Response
Here is an example:
chainCodeArgs := util.ToChaincodeArgs("arg1", "arg2")
response := stub.InvokeChaincode("chaincodeName", chainCodeArgs, "channelID")
if response.Status != shim.OK {
    return shim.Error(response.Message)
}
And for reference, this is the shape of that method as implemented on a test stub (here it simply returns an empty response):
func (stub *TestAPIStub) InvokeChaincode(chaincode1 string, args [][]byte, channel string) pb.Response {
    return pb.Response{}
}
You can refer to this document to understand how one smart contract ("chaincode") calls another smart contract.

How to fetch fieldname and structure for additional properties in operation, managedObject etc

I am trying to figure out which fragments are related to an operation and to the following objects:
managedObject
event
measurement
alarm
So, is there a way to get all these fragments?
Also, there are additional properties for which the field name is defined as * and the value can be an Object or anything else (*). I have gone through the device management library and the sensor library in the Cumulocity documentation, but found that they do not contain all the possible fragments, and there is no clarity as to which object a fragment goes in, i.e. does it go in the operation, the managedObject, or both?
Since every user, device, and application can contribute such fragments, there is no "global list" of them that you could refer to. Normally, a client (application or device) knows what data it sends or requests, so in most cases such a list is also not required.
Regarding the relationship between operations and managed objects, there are some typical design patterns. Let's say you want to configure something in a device, like a polling interval:
"mydevice_Configuration": { "pollingRate": 60 }
What your application would do is to send that fragment as an operation to a device:
POST /devicecontrol/operations HTTP/1.1
...

{
    "deviceId": "12345",
    "mydevice_Configuration": { "pollingRate": 60 }
}
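For illustration only, the same request could be issued from application code roughly like this (the endpoint and fragment are taken from the example above; the tenant URL, credentials, and authentication scheme are placeholders that depend on your own setup):

// Sketch: POST the operation shown above to the Cumulocity REST API.
// TENANT_URL, USER and PASSWORD are placeholders; btoa is available in
// browsers and recent Node.js versions.
const TENANT_URL = "https://<tenant>.cumulocity.com";

async function sendConfigurationOperation(deviceId: string): Promise<void> {
    const response = await fetch(`${TENANT_URL}/devicecontrol/operations`, {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            "Authorization": "Basic " + btoa("USER:PASSWORD"),
        },
        body: JSON.stringify({
            deviceId,
            mydevice_Configuration: { pollingRate: 60 },
        }),
    });
    console.log("Operation created, HTTP status:", response.status);
}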
The device would accept the operation (http://cumulocity.com/guides/rest/device-integration/#step-6-finish-operations-and-subscribe) and change its configuration. When it does that successfully, it will update its managed object to contain the new configuration:
PUT /inventory/managedObjects/12345 HTTP/1.1

{
    "mydevice_Configuration": { "pollingRate": 60 }
}
This way, your inventory reflects as closely as possible the true state of devices.
Hope that helps ...

Servicestack.Redis Pub/Sub limitations with other nested Redis commands

I am having a great experience with ServiceStack & Redis, but I'm confused by the ThreadPool and Pub/Sub within a thread, and by an apparent limitation on accessing Redis within a message callback. The actual error I get states that I can only call "Subscribe" or "Publish" within the "current context". This happens when I try to do another Redis action from the message callback.
I have a process that must run continuously. In my case I can't just service a request once, but must keep a thread alive all the time doing calculations (and controlling these threads from a REST API route is ideal). Data must come into the process on a regular basis, and data must be published. The process must also store and retrieve data from Redis. I am using routes and services to take data in and store it in Redis, so this must take place asynchronously from the "calculation" process. I thought pub/sub would be the answer to glue the pieces together, but so far that does not seem possible.
Here is how my code is currently structured (the code with the above error). This is the callback for the route that starts the long-term "calculation" thread:
public object Get(SystemCmd request)
{
    object ctx = new object();
    TradingSystemCmd SystemCmd = new TradingSystemCmd(request, ctx);

    ThreadPool.QueueUserWorkItem(x =>
    {
        SystemCmd.signalEngine();
    });

    return (retVal); // retVal defined elsewhere
}
Here is the SystemCmd.signalEngine():
public void signalEngine()
{
    using (var subscription = Redis.CreateSubscription())
    {
        subscription.OnSubscribe = channel =>
        {
        };
        subscription.OnUnSubscribe = channel =>
        {
        };
        subscription.OnMessage = (channel, msg) =>
        {
            TC_CalcBar(channel, redisTrade);
        };
        subscription.SubscribeToChannels(dmx_key); // blocking
    }
}
The "TC_CalcBar" call does processing on data as it becomes available. Within this call is a call to Redis for a regular database accesses (and the error). What I could do would be to remove the Subscription and use another method to block on data being available in Redis. But the current approach seemed quite nice until it failed to work. :-)
I also don't know if the ThreadPool has anything to do with the error, or not.
As per the Redis documentation:
Once the client enters the subscribed state it is not supposed to issue any other commands, except for additional SUBSCRIBE, PSUBSCRIBE, UNSUBSCRIBE and PUNSUBSCRIBE commands.
Source: http://redis.io/commands/subscribe
In other words, the connection that holds the subscription cannot be reused for ordinary Redis commands; any reads or writes done inside OnMessage need to go through a separate, non-subscribed client (for example, one resolved from your client manager) rather than the subscribed one.