How do I make free read-only calls to a smart contract on the Hedera network without incurring charges?

I am trying to make free read-only calls to a smart contract on the Hedera network, but I keep getting unexpected results. I have tried various methods, but I am unable to make the calls without incurring charges. I am looking for a solution or guidance on how to properly make free read-only calls to a smart contract on Hedera.
// Create the transaction
const transaction = new ContractExecuteTransaction()
    .setContractId(newContractId)
    .setFunction("get_message");
I expected get_message not to charge me HBAR, since that function just returns a hardcoded string, but I can't execute it for free like I want to. How do I do this?

If you're using the SDK, ContractCallQuery() is better suited for read-only queries. See the sample below:
// Query the contract to check changes in state variable
const contractQueryTx1 = new ContractCallQuery()
    .setContractId(contractId)
    .setGas(100000)
    .setFunction("get_message");
const contractQuerySubmit1 = await contractQueryTx1.execute(client);
// Decode the first return value as a string
const message = contractQuerySubmit1.getString(0);
Note that the SDK still requires a small amount of gas, so this is cheap but not entirely free.
There are a couple of other ways to do cost-free queries.
Use mirror nodes. These two tutorials can give you additional information on working with mirror nodes: https://hedera.com/blog/how-to-inspect-smart-contract-transactions-on-hedera-using-mirror-nodes and https://hedera.com/blog/how-to-look-up-transaction-history-on-hedera-using-mirror-nodes-back-to-the-basics
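For example, newer mirror node versions expose a REST endpoint that simulates contract calls for free. A minimal sketch, assuming the public testnet mirror node and the POST /api/v1/contracts/call endpoint (verify against the current mirror node docs); the address and calldata below are placeholders:
// Free read-only call via a public mirror node -- no payer account needed.
// "to" is the contract's EVM address; "data" is the ABI-encoded call,
// i.e. the first 4 bytes of keccak256("get_message()"). Both are placeholders.
const response = await fetch(
    "https://testnet.mirrornode.hedera.com/api/v1/contracts/call",
    {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
            to: "0x0000000000000000000000000000000000000000", // placeholder
            data: "0x00000000", // placeholder selector
        }),
    }
);
const { result } = await response.json(); // ABI-encoded return value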
If you use Hashio (https://swirldslabs.com/hashio/) as a JSON-RPC relay, then you can use EVM tooling to deploy and interact with contracts on Hedera. You can then simply call contracts the way you would on a chain like Ethereum. Here are some examples: https://github.com/hashgraph/hedera-json-rpc-relay/tree/main/tools
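As an illustration, a minimal read-only call through a JSON-RPC relay with ethers.js might look like the sketch below; the relay URL, contract address, and ABI fragment are assumptions you would replace:
import { ethers } from "ethers";

// Point an ordinary ethers provider at the relay (testnet URL assumed).
const provider = new ethers.providers.JsonRpcProvider("https://testnet.hashio.io/api");
const abi = ["function get_message() view returns (string)"];
const contract = new ethers.Contract("0xYourContractEvmAddress", abi, provider);

// A view call goes out as eth_call, so no transaction fee is charged.
const message = await contract.get_message();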

Related

Standalone alternative to express-session for use in serverless context (w/ DynamoDB)

Background: why use cookies with Lambdas?
OWASP is very clear that cookies are the best option for session management:
.. cookies .. are one of the most extensively used session ID exchange mechanisms, offering advanced capabilities not available in other methods.
However, AWS's API Gateway literature often talks about using JWTs for authentication rather than cookies. While some tech blogs out there seem to think it's OK to use JWTs this way, there are definitely recognised issues with JWTs. Two issues of particular note are:
(a) You can't easily invalidate a JWT. At best you can keep a server-side database of blocked JWTs and make sure any service validating JWTs also checks against this block list (sketched below). That sounds a lot like implementing regular old sessions, largely defeating the point of using JWTs.
(b) If you want to use JWTs for authorization as well as authentication, you'll run into issues when you need to update the authorization and it's not a change driven by the end user themselves. Scenarios in this category include: a system administrator or account manager changes the user's access level; a trial/contract is ended by a cron job; a webhook is triggered by a 3rd-party SaaS integration (e.g. Stripe). You might say, "in that case use a separate mechanism for authorization", but then again you're back to good old sessions.
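To illustrate (a), a minimal sketch of that block-list check; jsonwebtoken is real, but the denylist store and its has() method are hypothetical:
import jwt from "jsonwebtoken";

// Every service that verifies JWTs must also consult the shared denylist --
// which is effectively a server-side session lookup.
async function verifyToken(token, denylist, secret) {
    const payload = jwt.verify(token, secret); // throws if invalid or expired
    if (await denylist.has(payload.jti)) { // hypothetical denylist store
        throw new Error("token has been revoked");
    }
    return payload;
}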
To be clear, I understand the value of JWTs in letting one server communicate its trust in a user's identity to another server, but that's a very different purpose to session management.
Session management in Node
All roads seem to lead to express-session as the most battle-tested implementation of sessions in Node. It offers a wide range of storage options to choose from*.
In the context of Lambdas, you could in principle use express-session as though it were just a function factory for functions with the signature (req, res, next) => void, but that's rather hacky and in no way recommended by express-session (see the sketch below). It's also not entirely clear how best to match that call signature to the objects you get in an AWS Lambda, nor which storage mechanisms are optimised for Lambdas (which are ephemeral and need to start quickly).
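For what it's worth, a minimal sketch of that hack, assuming you can fake enough of the (req, res) objects inside a Lambda handler:
import session from "express-session";

// Configure once, outside the handler, so warm invocations reuse it.
const sessionMiddleware = session({
    secret: "something",
    resave: false,
    saveUninitialized: false,
});

// Promisify the (req, res, next) signature so a handler can await it.
function runSession(req, res) {
    return new Promise((resolve, reject) => {
        sessionMiddleware(req, res, (err) => (err ? reject(err) : resolve()));
    });
}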
I would really like a lightweight node module that lets you do something like the following:
import {Sessions} from 'sessions';
// configure session management. Should be super lightweight for use in Lambda.
const sessions = new Sessions({
    /* ..basic cookie & expiration config, */
    secret: "something", // extra security recommended by express
    store: { // object with the following interface:
        createSession(sessionId, metadata),
        getFromSessionId(sessionId),
        updateSession(sessionId, metadata),
        customIndexedProperties: ['userId'], // in addition to sessionId
        getSessionIdsFromIndexedProperty(propertyName, propertyValue),
    }
});
// create session. Note the api is not opinionated about response header mechanics.
response.headers['set-cookie'] = await sessions.createCookieForSession({
    userId: 'user1',
    /* ...other user info */
});
// get user's session. Again not opinionated about where the cookie comes from.
const userSessionInfo = await sessions.getSessionFromCookie(request.header['cookies']);
// update a user's session, but not initiated by the user themselves
const sessionIds = await sessions.getSessionIdsFromIndexedProperty('userId', 'user1');
await sessions.updateSession(sessionIds[0], {something: 'has changed'});
Questions
1. Is my above thinking reasonable?
2. Are there any node packages I've not encountered that might be helpful?
3. If not, why not? I have come across a few people with closely related problems, but it must be a fairly common issue when working with serverless.
4. If I were to implement a module to my own liking, how would I get any confidence that I've done a good job security-wise? I could reuse the pieces of express-session that are relevant, but that's not a great long-term solution for good security.
5. Related to 4, if I were to hook into express-session but build my own store that does what I need, how would I get confidence in the security? Also, I haven't managed to find any docs on what the official API is for an express-session store, which I find amazing given that express-session seems to be the go-to for sessions in Node.
Any help would be massively appreciated, thanks!
P.S. I appreciate a lot of what I'm discussing relates to open source projects that are often poorly funded. However, I was very surprised to have reached the above conclusions about the state of the ecosystem and wondered if I was missing something.
*Annoyingly, the suggested DynamoDB store package isn't great. Two of the features we'd want from it are not supported; PRs for them seem to have been opened back in 2019 but never looked at by the maintainer. Technically we don't absolutely have to use DynamoDB as our store, but it does offer a lot of features we like.

In a minidriver, can the values of hSCardCtx and hScard in PCARD_DATA change after CardAcquireContext is called?

I am working on a minidriver project to perform operations on a smart card.
I have registered the smart card in the registry with the proper ATR and minidriver information.
Now I am trying to generate a keypair using CNG -> minidriver -> smart card.
To achieve this I have called NCryptOpenStorageProvider from a test application, which returns success.
But when I call NCryptCreatePersistedKey and NCryptFinalizeKey, it can't communicate with the smart card.
In the minidriver it calls CardAuthenticateEx and fails in SCardTransmit, though the previous commands for finding the path and searching objects, such as CardGetProperty and CardReadFile, can communicate with the smart card successfully.
Yes, the values of the hSCardCtx and hScard fields of CARD_DATA can change after CardAcquireContext is called. So one should never store these handles for use in subsequent function calls; rather, each minidriver function should retrieve them from its PCARD_DATA parameter. Failing to do so will cause issues like the one you are describing.

Best practice for sending large messages on ServiceBus

We need to send large messages on ServiceBus Topics. Current size is around 10MB. Our initial take is to save a temporary file in BlobStorage and then send a message with reference to the blob. The file is compressed to save upload time. It works fine.
Today I read this article: http://geekswithblogs.net/asmith/archive/2012/04/10/149275.aspx
The suggestion there is to split the message in smaller chunks and on the receiving side aggregate them again.
I can admit that it is a "cleaner approach", avoiding the round trip to Blob storage. On the other hand, I prefer to keep things simple, and the splitting mechanism introduces added complexity. I mean, there must have been a reason why they didn't include that in Service Bus from the beginning ...
Has anyone tried the splitting approach in real life situation?
Are there better patterns?
I wrote that blog article a while ago; the intention was to implement the splitter and aggregator patterns using the Service Bus. I found this question by chance when searching for a better alternative.
I agree that the simplest approach may be to use Blob storage to store the message body and send a reference to it in the message. This is the scenario we are considering for a customer project right now.
I remember a couple of years ago there was some sample code published that would abstract Service Bus and Storage Queues from the client application and handle the use of Blob storage for large message bodies when required. (I think it was the CAT team at Microsoft, but I'm not sure.)
I can't find the sample with a quick Google search, but as it's probably a couple of years old it will be out of date; the Service Bus client library has been improved a lot since then.
I have used the splitting of messages when the message size was too large, but as this was for batched telemetry data there was no need to aggregate the messages, and I could just process a number of smaller batches on the receiving end instead of one large message.
Another disadvantage of the splitter-aggregator approach is that it requires sessions, and therefore a session-enabled Queue or Subscription. This means that all messages will require sessions, even smaller ones, and that the Session Id cannot be used for another purpose in the implementation.
If I were you I would not trust the code on the blog post, it was written a long time ago, and I have learned a lot since then :-).
The Blob Storage approach is probably the way to go.
Regards,
Alan
In case someone stumbles upon the same scenario, the Claim Check pattern would help.
Details:
Implement the Claim Check pattern
Use ServiceBus.AttachmentPlugin (assuming you use C#; optionally, you can create your own)
Use external storage, e.g. an Azure Storage Account (optionally, you can use other storage)
C# Code Snippet
using System;
using System.Text;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core; // MessageSender
using Newtonsoft.Json;
using ServiceBus.AttachmentPlugin;
...
// Getting connection information
var serviceBusConnectionString = Environment.GetEnvironmentVariable("SERVICE_BUS_CONNECTION_STRING");
var queueName = Environment.GetEnvironmentVariable("QUEUE_NAME");
var storageConnectionString = Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING");
// Creating config for sending message
var config = new AzureStorageAttachmentConfiguration(storageConnectionString);
// Creating and registering the sender using Service Bus Connection String and Queue Name
var sender = new MessageSender(serviceBusConnectionString, queueName);
sender.RegisterAzureStorageAttachmentPlugin(config);
// Create payload
var payload = new { data = "random data string for testing" };
var serialized = JsonConvert.SerializeObject(payload);
var payloadAsBytes = Encoding.UTF8.GetBytes(serialized);
var message = new Message(payloadAsBytes);
// Send the message
await sender.SendAsync(message);
References:
https://learn.microsoft.com/en-us/azure/architecture/patterns/claim-check
https://learn.microsoft.com/en-us/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/
https://www.enterpriseintegrationpatterns.com/patterns/messaging/StoreInLibrary.html
https://github.com/SeanFeldman/ServiceBus.AttachmentPlugin
https://github.com/mspnp/cloud-design-patterns/tree/master/claim-check/code-samples/sample-3

Subscriptions in Paymill

I have a client with a subscription "Large" (recurring payment).
I create a payment and an offer object for doing that, and it works.
Now I want to update that subscription to "Small" (a different name and amount), but without updating the credit card.
The Paymill flow for doing this is documented very vaguely, and it's unclear what the process is.
If you have done anything like this with Paymill, I would be happy to hear which calls you are making.
I am using the .NET wrapper.
To change the offer (plan) of a running subscription, there are three options you can use.
Here is the documentation: https://developers.paymill.com/en-gb/subscription-v2-workflow/#update-sub-plan
In the .NET wrapper they are exposed as three documented methods:
ChangeOfferKeepCaptureDateAndRefundAsync
ChangeOfferKeepCaptureDateNoRefundAsync
ChangeOfferChangeCaptureDateAndRefundAsync

Porting PHP API over to Parse

I am a PHP dev looking to port my API over to the Parse platform.
Am I right in thinking that you only need cloud code for complex operations? For example, consider the following methods:
// Simple function to fetch a user by id
function getUser($userid) {
    return (SELECT * FROM users WHERE userid=$userid LIMIT 1)
}
// Another simple function; fetches all of a user's allergies (by their user id)
function getAllergies($userid) {
    return (SELECT * FROM allergies WHERE userid=$userid)
}
// Creates a script (story?) about the user using their user id
// Uses their name and allergies to create the story
function getScript($userid) {
    $user = getUser($userid);
    $allergies = getAllergies($userid);
    return "My name is {$user->getName()}. I am allergic to {$allergies}";
}
Would I need to implement getUser()/getAllergies() endpoints in Cloud Code? Or can I simply use Parse.Query("User")... thus leaving me with only the getScript() endpoint to implement in cloud code?
Cloud Code is for computation-heavy operations that should not be performed on the client, e.g. handling a large dataset.
It is also for beforeSave/afterSave and similar hooks.
In your example, provided you have set up a reasonable data model, none of the operations require Cloud Code.
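For instance, a simple read like getAllergies() can run straight from the client with the JavaScript SDK; the "Allergy" class name and its "user" pointer column below are assumptions based on your schema:
// Client-side equivalent of getAllergies() -- no Cloud Code needed.
const query = new Parse.Query("Allergy");
query.equalTo("user", Parse.User.current()); // filter by the logged-in user
const allergies = await query.find();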
Your approach sounds reasonable. I tend to put simple queries that will most likely not change on the client side, but it all depends on your scenario. When developing mobile apps I tend to put a lot of code in Cloud Code; I've found that it speeds up my development cycle. For example, if someone finds a bug and it's in Cloud Code: make the fix, run parse deploy, done! The change is available to all mobile environments instantly!!! If that same code is in my mobile app, it really sucks, because now I have to fix the bug, rebuild, push it to the App Store/Google Play, wait x number of days for it to be approved, and have the users download it... you see where I'm going here.
Take for example your SELECT * FROM allergies WHERE userid=$userid query.
Even though this is a simple query, what if you want to sort it? Maybe add some additional filtering?
These are the kinds of things I think of when deciding where to put the code. Hope this helps!
As a side note, I have also found cloud code very handy when needing to add extra security to my apps.
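As a rough illustration of the Cloud Code side, here is a sketch of what the getScript() endpoint from the question might look like; the "Allergy" class, its "user" and "name" columns, and the user's "name" field are assumptions:
// Hypothetical Cloud Code sketch for getScript()
Parse.Cloud.define("getScript", async (request) => {
    const query = new Parse.Query("Allergy");
    query.equalTo("user", request.user);
    const allergies = await query.find({ useMasterKey: true });
    const list = allergies.map((a) => a.get("name")).join(", ");
    return `My name is ${request.user.get("name")}. I am allergic to ${list}`;
});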