I am new to blockchain; please help me understand.
How are a smart contract's state variables stored on the blockchain if a smart contract is immutable (it was deployed in a transaction, so its byte-code is stored in that transaction)?
Maybe every new value of a state variable is recorded by a new method call (a setter) in a transaction, but then how does the smart contract know how to address those values if the contract was created earlier?
I also found a mention of state storage on every EVM: "Technically you don't need to store this on disk, you could just play back all transactions when you boot up the node". Again, how is it possible to play back all the transactions related to a contract? How are they connected to a contract?
Immutability applies only to data placed directly on the blockchain, that is, to transaction data. In Ethereum, the values of smart contract variables are computed independently by each node when it processes a transaction on its own EVM instance.
In Hyperledger Fabric, by contrast, the final results of the computation are transmitted along with the transaction, and the node simply records them in its state database. Even so, each node decides for itself whether or not to accept the transaction.
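The "play back all transactions" idea can be sketched as follows: since every transaction records which contract it targets, a node can rebuild a contract's state by replaying, in order, only the transactions addressed to that contract. This is a minimal illustration in Python, not real EVM semantics; the addresses and the "set" call format are hypothetical.

```python
def replay_state(transactions, contract_address):
    """Derive a contract's current state from the full transaction history."""
    state = {}  # variable name -> value, standing in for contract storage
    for tx in transactions:
        if tx["to"] != contract_address:
            continue  # transaction targets a different contract
        if tx["call"] == "set":
            state[tx["key"]] = tx["value"]
    return state

history = [
    {"to": "0xContractA", "call": "set", "key": "owner", "value": "0xAlice"},
    {"to": "0xContractB", "call": "set", "key": "owner", "value": "0xBob"},
    {"to": "0xContractA", "call": "set", "key": "count", "value": 2},
]

# State is fully determined by the ordered history, so every node that
# replays the same transactions reaches the same state.
print(replay_state(history, "0xContractA"))
```

This is why a node does not strictly need a state database on disk: the database is just a cache of what replaying the chain would produce.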
For NFT minting, the standard says to emit a Transfer event with the zero address as the from value. But I was wondering whether a dedicated event, say Mint, would sound better and be clearer:
event Mint(address indexed to, uint256 indexed tokenId);
It would also give us the advantage of one more parameter that can be indexed in the event, since at most three values can be indexed.
Can anyone please clarify this? What is the best way?
Pasted from a comment below my other answer, which asked:
why not use a dedicated mint event?
I can't speak for the authors and reviewers of the ERC-721 standard as to why they chose this specific approach. But from my understanding, emitting a Transfer event log with the zero sender address when minting ERC-20 tokens was already common practice when the 721 standard was being created. So one of the reasons might have been code reusability for offchain apps such as blockchain explorers, allowing them to handle token minting in a more generalized way.
To add context to your more specific question about the advantage of being able to pass more values:
Apart from Transfer, you can also emit other event logs, including this arbitrary Mint as well, when you're minting new tokens.
Since this Mint event is not standardized, it will not be recognized by most offchain apps (such as Etherscan) as a token mint. They will only show it on the transaction detail page as "some event named Mint that we don't recognize", and their internal aggregated database of "who owns which tokens" and "which tokens were minted during this transaction" will still reflect only the values passed to the Transfer event.
However, you'll be able to handle this arbitrary event from your own offchain apps.
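To illustrate why explorers need no dedicated Mint event: an offchain app watches the standard Transfer(from, to, tokenId) logs and treats any transfer from the zero address as a mint (and to the zero address as a burn). This sketch uses simulated decoded logs rather than a real node connection; the log dict layout is illustrative.

```python
ZERO_ADDRESS = "0x0000000000000000000000000000000000000000"

def classify(transfer_log):
    """Classify a decoded Transfer event log as a mint, burn, or transfer."""
    if transfer_log["from"] == ZERO_ADDRESS:
        return "mint"   # minting convention: from == zero address
    if transfer_log["to"] == ZERO_ADDRESS:
        return "burn"   # burning convention: to == zero address
    return "transfer"

logs = [
    {"from": ZERO_ADDRESS, "to": "0xAlice", "tokenId": 1},  # a mint
    {"from": "0xAlice", "to": "0xBob", "tokenId": 1},       # a regular transfer
]

print([classify(log) for log in logs])  # ['mint', 'transfer']
```

A custom Mint event would sit alongside this, visible only to apps that know its signature.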
I was not able to find anyone raising this subject, which suggests it is probably not feasible, but I need to make sure.
So, is there any way to call some external endpoint from a smart contract?
No, it is not, by protocol. Allowing such a thing would be a security hole. Smart contracts are meant to deal with data on the blockchain itself, in an isolated and controlled environment.
Disclaimer: this applies not only to Solana but to most blockchains (I cannot say for sure whether it applies to all of them, but it would make sense if it did).
No. Calling external links on chain requires oracle services such as Chainlink, which didn't seem to be available on Solana as of writing.
Update: Chainlink data feeds are now available on Solana: https://docs.chain.link/docs/solana/data-feeds-solana/
Blockchains are deterministic: if I take the whole history of transactions stored on the nodes and replay them, I should get the same state.
The result of any transaction must always be the same, so that nodes can verify it no matter where, how, and when we call it.
In smart contracts, oracle services are used for external data. What is a blockchain oracle?
Blockchain oracles are entities that connect blockchains to external systems, thereby enabling smart contracts to execute based upon inputs and outputs from the real world. Oracles provide a way for the decentralized Web 3.0 ecosystem to access existing data sources, legacy systems, and advanced computations.
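The determinism argument can be sketched concretely: a pure state transition gives every node the same result, while a state transition that calls an external endpoint diverges between nodes, because each node queries at a different time and may get a different answer. The "oracle" calls below are simulated lambdas, not real network requests; a real oracle solves this by writing the external value into a transaction that all nodes then replay deterministically.

```python
def apply_deterministic(state, txs):
    """Pure state transition: the same txs always produce the same state."""
    for tx in txs:
        state[tx["account"]] = state.get(tx["account"], 0) + tx["amount"]
    return state

def apply_with_external_call(state, txs, fetch_price):
    """Broken design: the state depends on what each node's API call returns."""
    for tx in txs:
        state[tx["account"]] = state.get(tx["account"], 0) + tx["amount"] * fetch_price()
    return state

txs = [{"account": "alice", "amount": 2}]

# Deterministic replay: every node agrees on the resulting state.
node_a = apply_deterministic({}, txs)
node_b = apply_deterministic({}, txs)
print(node_a == node_b)  # True

# External call: two nodes see different prices, so their states disagree
# and consensus breaks.
node_c = apply_with_external_call({}, txs, lambda: 10)  # node C saw price 10
node_d = apply_with_external_call({}, txs, lambda: 11)  # node D saw price 11
print(node_c == node_d)  # False
```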
I am trying to learn blockchain technology and implement it in our existing product ecosystem. Bitcoin is the best-known example of blockchain technology.
I have read multiple articles on the internet and found that a blockchain is a distributed ledger and that every participant maintains a copy of the ledger.
There are millions of people holding bitcoins.
So do all these customers maintain a copy of the ledger at their end?
And when we make a new bitcoin transaction, do we need every other customer's consensus for the transaction to succeed?
Or is there a set of bitcoin miners who act on behalf of these customers and maintain the ledger at their end?
A blockchain is a network where everyone is at the same level, i.e. there isn't a centralized authority that runs the system. So everyone stores all the information.
And when we make a new bitcoin transaction, do we need every other customer's consensus for the transaction to succeed? Or is there a set of bitcoin miners who act on behalf of these customers and maintain the ledger at their end?
Consensus is reached among all the participants of the blockchain. In the case of Bitcoin, the mechanism used is Proof of Work (PoW).
When a Bitcoin node receives a new transaction, it verifies it. If all is OK, the node broadcasts the transaction to the rest of the Bitcoin nodes. Every node validates each new transaction it receives.
Some Bitcoin nodes are miners: they try to generate a block, which will store many transactions. When a node generates a block, the block is broadcast to the rest of the Bitcoin nodes.
When a node receives a new block, it verifies it, i.e. it checks that all the transactions are valid and the block is correct. If so, the node adds the new block to its blockchain.
All the nodes store the same blockchain, and all the nodes verify all the transactions and all the blocks.
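The PoW mechanism mentioned above can be sketched in a few lines: a miner brute-forces a nonce so that the block's hash meets a difficulty target, and any node can verify the result with a single hash. The difficulty here is tiny so the example runs instantly; real Bitcoin difficulty is vastly higher, and the block format is simplified to a string.

```python
import hashlib

DIFFICULTY = 3  # number of leading zero hex digits required

def block_hash(prev_hash, transactions, nonce):
    data = f"{prev_hash}|{transactions}|{nonce}".encode()
    return hashlib.sha256(data).hexdigest()

def mine(prev_hash, transactions):
    """Brute-force a nonce until the hash meets the difficulty target."""
    nonce = 0
    while True:
        h = block_hash(prev_hash, transactions, nonce)
        if h.startswith("0" * DIFFICULTY):
            return nonce, h
        nonce += 1

def verify(prev_hash, transactions, nonce):
    """Verification is cheap: one hash, checked against the target."""
    return block_hash(prev_hash, transactions, nonce).startswith("0" * DIFFICULTY)

# Mining is expensive, checking is trivial; that asymmetry is what lets
# every node independently verify a block another node produced.
nonce, h = mine("genesis", "alice->bob:1")
print(verify("genesis", "alice->bob:1", nonce))  # True
```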
I have been reading about blockchain and ethereum, but I cannot seem to get my head around a couple of concepts.
First, where in the blockchain is a newly created transaction stored? Say the blockchain has been running for some time and we are on block X right now. If I deploy my contract today and it gets executed, will my transaction details reside in every block after block X, or only in block X + 1? And will my transaction details be the only details in that block, or will that block contain every transaction that happened within that time period? Likewise, do all of the prior blocks' transactions get written to subsequent blocks as well? And if more than one transaction gets executed from the same contract, are they written as two different blocks or within the same block?
Second, when designing a contract, I have seen that it is usually restricted to the two parties that enter into it, and that for other people to use the contract a new instance of it has to be created. Is this understanding correct? Or should one contract be designed so that everyone uses it and only one instance is ever created?
where in the blockchain is a newly created transaction stored ?
In the blocks that each node stores.
will my transaction details reside on every block after block X or only on block X + 1?
Each transaction resides in only one block. But each block references the previous block by its hash, so every later block indirectly commits to your transaction, and it is secured by the whole chain.
And will my transaction details be the only details on that block, or will that block contain every transaction that happened within that time period?
It depends on the implementation of the blockchain. For example, Bitcoin blocks store all the transactions sent during the previous 10 minutes or so, because a block is mined roughly every 10 minutes.
Second
Blockchain is a distributed system where all the members are at the same level. So they reach consensus about what they are going to do, i.e. all the members have to agree on the functions of their blockchain.
Each blockchain can have more than one contract, but to explain it more simply: a smart contract is code that is installed on all the nodes of the blockchain, and every request is executed against it. So every node/member must have the same contract, and normally a single shared instance serves all users.
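The "one contract, many users" model from the second question can be sketched as follows (in Python rather than Solidity): a single deployed instance keeps per-user state in a mapping keyed by the caller's address, so there is no need to deploy a new instance for each pair of parties. The class, method names, and addresses are illustrative.

```python
class TokenContract:
    """One instance, shared by every user of the chain."""

    def __init__(self):
        self.balances = {}  # address -> balance, like a Solidity mapping

    def deposit(self, caller, amount):
        self.balances[caller] = self.balances.get(caller, 0) + amount

    def transfer(self, caller, to, amount):
        if self.balances.get(caller, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[caller] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount

# A single instance serves everyone; each call just identifies its caller
# (on-chain, msg.sender plays this role).
contract = TokenContract()
contract.deposit("0xAlice", 10)
contract.transfer("0xAlice", "0xBob", 4)
print(contract.balances)  # {'0xAlice': 6, '0xBob': 4}
```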
What is the best way to achieve DB consistency in microservice-based systems?
At the GOTO in Berlin, Martin Fowler was talking about microservices and one "rule" he mentioned was to keep "per-service" databases, which means that services cannot directly connect to a DB "owned" by another service.
This is super-nice and elegant but in practice it becomes a bit tricky. Suppose that you have a few services:
a frontend
an order-management service
a loyalty-program service
Now, a customer makes a purchase on your frontend, which calls the order-management service, which saves everything in its DB; no problem. At this point, there will also be a call to the loyalty-program service so that it credits/debits points on your account.
When everything is on the same DB/DB server, it all becomes easy, since you can run everything in one transaction: if the loyalty-program service fails to write to the DB, we can roll the whole thing back.
When the DB operations span multiple services, this isn't possible, as we can't rely on one connection or take advantage of running a single transaction.
What are the best patterns to keep things consistent and live a happy life?
I'm quite eager to hear your suggestions!..and thanks in advance!
This is super-nice and elegant but in practice it becomes a bit tricky
What this means "in practice" is that you need to design your microservices in such a way that the necessary business consistency is fulfilled while following the rule:
that services cannot directly connect to a DB "owned" by another service.
In other words - don't make any assumptions about their responsibilities and change the boundaries as needed until you can find a way to make that work.
Now, to your question:
What are the best patterns to keep things consistent and live a happy life?
For things that don't require immediate consistency, and updating loyalty points seems to fall in that category, you could use a reliable pub/sub pattern to dispatch events from one microservice to be processed by others. The reliable bit is that you'd want good retries, rollback, and idempotence (or transactionality) for the event processing stuff.
If you're running on .NET some examples of infrastructure that support this kind of reliability include NServiceBus and MassTransit. Full disclosure - I'm the founder of NServiceBus.
Update: following comments regarding concerns about the loyalty points ("if balance updates are processed with a delay, a customer may actually be able to order more items than they have points for"):
Many people struggle with these kinds of requirements for strong consistency. The thing is that such scenarios can usually be dealt with by introducing additional rules, like: if a user ends up with negative loyalty points, notify them; if time T goes by without the loyalty points being sorted out, notify the user that they will be charged amount M based on some conversion rate. This policy should be visible to customers when they use points to purchase stuff.
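The reliable pub/sub idea can be sketched like this: the order service publishes an event, and the loyalty consumer processes it with retries and idempotence (a set of already-processed event IDs), so a redelivered event never double-credits points. Real infrastructure such as NServiceBus, MassTransit, or a message broker replaces the in-memory structures here; all names are illustrative.

```python
class LoyaltyConsumer:
    def __init__(self):
        self.points = {}
        self.processed = set()  # event IDs we have already handled

    def handle(self, event):
        if event["id"] in self.processed:
            return  # idempotence: a redelivered event is a no-op
        self.points[event["customer"]] = (
            self.points.get(event["customer"], 0) + event["points"]
        )
        self.processed.add(event["id"])

def deliver_with_retries(consumer, event, max_attempts=3):
    """Keep retrying until the handler succeeds or attempts run out."""
    for _ in range(max_attempts):
        try:
            consumer.handle(event)
            return True
        except Exception:
            continue  # transient failure: try again
    return False

consumer = LoyaltyConsumer()
event = {"id": "evt-1", "customer": "alice", "points": 50}
deliver_with_retries(consumer, event)
deliver_with_retries(consumer, event)  # duplicate delivery is harmless
print(consumer.points)  # {'alice': 50}
```

The combination of at-least-once delivery plus an idempotent handler is what makes the delayed, eventually consistent update safe.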
I don’t usually deal with microservices, and this might not be a good way of doing things, but here’s an idea:
To restate the problem, the system consists of three independent-but-communicating parts: the frontend, the order-management backend, and the loyalty-program backend. The frontend wants to make sure some state is saved in both the order-management backend and the loyalty-program backend.
One possible solution would be to implement some type of two-phase commit:
1) First, the frontend places a record in its own database with all the data. Call this the frontend record.
2) The frontend asks the order-management backend for a transaction ID, and passes it whatever data it would need to complete the action. The order-management backend stores this data in a staging area, associating it with a fresh transaction ID and returning that to the frontend.
3) The order-management transaction ID is stored as part of the frontend record.
4) The frontend asks the loyalty-program backend for a transaction ID, and passes it whatever data it would need to complete the action. The loyalty-program backend stores this data in a staging area, associating it with a fresh transaction ID and returning that to the frontend.
5) The loyalty-program transaction ID is stored as part of the frontend record.
6) The frontend tells the order-management backend to finalize the transaction associated with the stored transaction ID.
7) The frontend tells the loyalty-program backend to finalize the transaction associated with the stored transaction ID.
8) The frontend deletes its frontend record.
If this is implemented, the changes will not necessarily be atomic, but the system will be eventually consistent. Let's think of the places it could fail:
If it fails at step 1, no data will change.
If it fails at step 2, 3, 4, or 5, then when the system comes back online it can scan through all frontend records, looking for records missing a transaction ID (of either type). If it comes across any such record, it can replay beginning at step 2. (If there was a failure at step 3 or 5, there will be some abandoned records left in the backends, but since they are never moved out of the staging area this is OK.)
If it fails at step 6, 7, or 8, then when the system comes back online it can look for all frontend records with both transaction IDs filled in. It can then query the backends to see the state of those transactions, committed or uncommitted, and depending on which have been committed, resume from the appropriate step.
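The staged two-phase flow above can be sketched as follows: each backend parks incoming data in a staging area under a fresh transaction ID, and only "finalize" moves it into real state, so a recovery scan can resume using the IDs stored in the frontend record. All class and method names here are illustrative, not a real framework.

```python
import itertools

class Backend:
    def __init__(self):
        self.staging = {}    # txn_id -> pending data
        self.committed = {}  # txn_id -> finalized data
        self._ids = itertools.count(1)

    def prepare(self, data):
        """Phase 1: stage the data and hand back a transaction ID."""
        txn_id = next(self._ids)
        self.staging[txn_id] = data
        return txn_id

    def finalize(self, txn_id):
        """Phase 2: move staged data into the real state."""
        self.committed[txn_id] = self.staging.pop(txn_id)

orders, loyalty = Backend(), Backend()
frontend_record = {"data": {"item": "book", "points": -10}}

# Steps 2-5: stage in both backends, remembering the IDs in the record.
frontend_record["order_txn"] = orders.prepare({"item": "book"})
frontend_record["loyalty_txn"] = loyalty.prepare({"points": -10})

# Steps 6-7: finalize both; step 8 would delete the frontend record.
orders.finalize(frontend_record["order_txn"])
loyalty.finalize(frontend_record["loyalty_txn"])

print(orders.committed, loyalty.committed)
```

A crash between prepare and finalize leaves only staged data plus the IDs in the frontend record, which is exactly what the recovery scan needs.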
I agree with what @Udi Dahan said. I just want to add to his answer.
I think you need to persist the request to the loyalty program so that, if it fails, it can be completed at some later point. There are various ways to phrase/do this:
1) Make the loyalty-program API failure-recoverable. That is to say, it can persist requests so that they do not get lost and can be recovered (re-executed) at some later point.
2) Execute the loyalty-program requests asynchronously. That is to say, persist the request somewhere first, then let the service read it from this persisted store. Only remove it from the persisted store once it has been successfully executed.
3) Do what Udi said and place it on a good queue (the pub/sub pattern, to be exact). This usually requires the subscriber to do one of two things: either persist the request before removing it from the queue (go to 1), or first borrow the request from the queue and have it removed only after successfully processing it (this is my preference).
All three accomplish the same thing. They move the request to a persisted place where it can be worked on till successful completion. The request is never lost, and retried if necessary till a satisfactory state is reached.
I like to use the example of a relay race. Each service or piece of code must take hold and ownership of the request before allowing the previous piece of code to let go of it. Once it's handed off, the current owner must not lose the request till it gets processed or handed off to some other piece of code.
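The "borrow, then remove only after success" behavior from option 3 (the relay-race handoff) can be sketched like this: a borrowed request stays owned by the queue until it is acknowledged, so a crash mid-processing never loses it; it just gets requeued. This is an in-memory illustration, not a real broker (SQS-style visibility timeouts implement the same idea).

```python
from collections import deque

class BorrowQueue:
    def __init__(self):
        self.pending = deque()
        self.in_flight = {}  # request id -> borrowed-but-unacked request

    def put(self, request):
        self.pending.append(request)

    def borrow(self):
        """Hand out a request without the queue giving up ownership of it."""
        request = self.pending.popleft()
        self.in_flight[request["id"]] = request
        return request

    def ack(self, request_id):
        """Only after successful processing is the request truly removed."""
        del self.in_flight[request_id]

    def requeue_in_flight(self):
        """On crash or timeout, borrowed-but-unacked requests go back."""
        while self.in_flight:
            _, request = self.in_flight.popitem()
            self.pending.appendleft(request)

queue = BorrowQueue()
queue.put({"id": "r1", "action": "credit-points"})

request = queue.borrow()
# ...the consumer crashes before acking; the request is recovered, not lost:
queue.requeue_in_flight()
print(len(queue.pending))  # 1
```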
Even with distributed transactions you can get into a "transaction in doubt" status if one of the participants crashes in the middle of the transaction. If you design the services as idempotent operations, life becomes a bit easier; one can write programs that fulfill the business conditions without XA. Pat Helland has written an excellent paper on this called "Life Beyond Distributed Transactions". Basically, the approach is to make as few assumptions about remote entities as possible.
He also illustrated an approach called Open Nested Transactions (http://www.cidrdb.org/cidr2013/Papers/CIDR13_Paper142.pdf) for modeling business processes. In this specific case, the purchase transaction would be the top-level flow, and loyalty and order management would be next-level flows. The trick is to create the granular services as idempotent services with compensation logic, so that if anything fails anywhere in the flow, the individual services can compensate for it. For example, if the order fails for some reason, the loyalty service can deduct the points accrued for that purchase.
Another approach is to model eventual consistency using CALM or CRDTs. I've written a blog post highlighting the use of CALM in real life: http://shripad-agashe.github.io/2015/08/Art-Of-Disorderly-Programming Maybe it will help you.