Truffle migrate --reset without losing data - migration

I am developing a smart contract using Solidity. Every time I edit the contract and want to test it, I run truffle migrate --reset.
By doing so, I lose all the data that I had saved in the contract.
I wonder if there is any way to migrate the contract while preserving the data, as is done with traditional databases, since running truffle migrate on its own does not actually redeploy the contract; it only recompiles it.
Thanks a lot!

Short answer: currently not possible using Truffle. But if you know what you're doing, you could keep the storage with a more low-level approach.
By running truffle migrate you are usually running JavaScript migration code that deploys the Solidity contract using the standard CREATE EVM opcode. So every time you run truffle migrate, Truffle deploys the contract to a new address that has empty storage slots.
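For reference, a standard migration script looks roughly like the sketch below (the contract and file names are placeholders). Every time it runs, deployer.deploy issues a plain contract-creation (CREATE) transaction, which is why each migration lands at a fresh address:

// migrations/2_deploy_my_contract.js (hypothetical file name)
const MyContract = artifacts.require("MyContract");

module.exports = function (deployer) {
  // deployer.deploy sends a regular CREATE deployment transaction,
  // so the contract ends up at a brand-new address with empty storage.
  deployer.deploy(MyContract);
};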
Even though it's possible to redeploy a smart contract to an already used address using the CREATE2 opcode (and to keep the original storage data), Truffle currently doesn't support this option.
Note: The --reset option only runs all migrations from the beginning in case of previous failure, but has no effect on the storage or the contract address.
You could achieve this goal (keeping the storage) by crafting the contract-deployment transaction so that it deploys to an address that already held a self-destructed contract. But that's a much more low-level approach than Truffle currently allows. If you want to learn more about this technique, these articles would be a good start (a small address-derivation sketch follows the links):
https://blog.ricmoo.com/wisps-the-magical-world-of-create2-5c2177027604
https://medium.com/@jason.carver/defend-against-wild-magic-in-the-next-ethereum-upgrade-b008247839d2
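To see why CREATE2 makes the target address reproducible, here is a minimal sketch using ethers.js v5 (the factory address, salt label, and init bytecode are made-up placeholders). The address is derived only from the deploying contract, a salt, and the hash of the init code, so re-running the same deployment yields the same address:

const { ethers } = require("ethers");

// Hypothetical inputs for illustration only.
const factory = "0x0000000000000000000000000000000000000001";        // contract that executes CREATE2
const salt = ethers.utils.id("my-contract-v1");                       // 32-byte salt (keccak256 of a label)
const initCodeHash = ethers.utils.keccak256("0x600a600c600039600a6000f3"); // hash of the creation bytecode

// address = last 20 bytes of keccak256(0xff ++ factory ++ salt ++ keccak256(initCode))
console.log(ethers.utils.getCreate2Address(factory, salt, initCodeHash));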


How to transfer user data from production to preproduction?

We have a payroll system where users input their worked hours. There is a database of users with names, emails, addresses, input hours etc. We want to transfer that data to preprod for testing purposes.
My question is: how should we go about transferring personal data in compliance with GDPR? Do we absolutely have to replace the user data, or are there other ways?
Does the data in a preprod environment have to be exactly equal to the data in prod? If not, you can use a library to fill your preprod database with fake data; a library like Faker can do the job (see the sketch below).
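For example, here is a rough seeding sketch with @faker-js/faker (v8 API); the employee fields and record count are made up, and you would insert the rows with whatever DB client your preprod stack uses:

// seed-preprod.js (hypothetical seeding script)
const { faker } = require("@faker-js/faker");

function fakeEmployee() {
  return {
    name: faker.person.fullName(),
    email: faker.internet.email(),
    address: faker.location.streetAddress(),
    hoursWorked: faker.number.int({ min: 0, max: 160 }),
  };
}

// Generate 100 synthetic employees instead of copying real user data.
const employees = Array.from({ length: 100 }, fakeEmployee);
console.log(employees[0]);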
Concerning GDPR: if both databases are hosted by the same service, on the same server and in the same timezone, you are not breaking any rules. But if a user requests that their data be removed, it has to be removed in both the production and preprod environments.
A pre-production environment should by definition be as near as humanly possible to the production environment. As long as you have the same privacy protections in that environment as in the production environment (restrictions to need-to-know, ability to extract data at request, ability to purge data at request, and so on), I don't see why GDPR should be a hindrance. You need to have measures in place to ensure the data-in-motion part doesn't fall into the wrong hands, of course: sometimes, encryption actually is the answer to security issues!
IANAL, of course: there sometimes seem to be about as many interpretations of GDPR as there are citizens in the EU…

How to break down terraform state file

I am looking for guidance/advice on how to best break down a terraform state file into smaller state files.
We currently have one state file for each environment, and it has become unmanageable, so we are now looking to have a state file per Terraform module. That means we need to split up the current state file.
Would it be best to point it at a new S3 bucket, then run a plan and apply for the broken-down modules and generate a fresh state file for each module, or is there an easier or better way to achieve this?
This all depends on how your environment has been provisioned and how critical downtime is.
Below are the two general scenarios I can think of based on your question.
First scenario (if you can take downtime)
Destroy everything you have and start from scratch, defining a separate backend for each module and provisioning the infrastructure from that point on. Now you have backend segregation, and infrastructure management becomes easier.
Second scenario (if you can't take downtime)
Let's say you are running mission-critical workloads that absolutely can't take any downtime.
In this case, you will have to come up with a proper plan for migrating the huge monolithic backend to smaller backends.
Terraform has a command called terraform state mv which can help you move resources from one Terraform state to another (see the example below).
When you work through this scenario, start with the lower-level environments and work up from there.
Note down any caveats you encounter during the migration in the lower-level environments; the same caveats will apply in the higher-level environments as well.
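For example, assuming the monolithic state sits in a remote S3 backend and contains a module.network module (both names are hypothetical), the extraction could look roughly like this:

# Take a local copy of the monolithic remote state
terraform state pull > old.tfstate

# Move the module's resources from that copy into a fresh state file
terraform state mv -state=old.tfstate -state-out=network.tfstate module.network module.network

# From the new, smaller configuration's working directory (after terraform init),
# push the extracted state into its own backend
terraform state push network.tfstate

You would then push old.tfstate back to the original backend (or otherwise remove the moved addresses from it) so the resources are not tracked twice, and run terraform plan on both sides to confirm nothing is pending.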
Some useful links
https://www.terraform.io/docs/cli/commands/state/mv.html
https://www.terraform.io/docs/cli/commands/init.html#backend-initialization
Although the only other answer (as of now) lists only two options, there is a third: you can simply create new Terraform repos (or folders, however you are handling your infrastructure) and then use terraform import to bring the existing infrastructure into those repos.
Once all of the imports have proven to be successful, you can remove the original repo/source/etc. of the monolithic terraform state.
The caveat is that the code for each of the new state sources must match the existing code and state, otherwise this will fail.
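For example (the resource address and bucket name are hypothetical), importing an existing S3 bucket into one of the new, smaller configurations looks like this:

# Run from the new repo/folder whose code already declares aws_s3_bucket.logs;
# the address must match the code, and the ID is the real bucket name.
terraform import aws_s3_bucket.logs my-existing-logs-bucket

# If code and reality match, the plan should show no changes.
terraform plan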

Spring Cloud Function on Google Cloud + SQL

I am trying to build a web app on Google Cloud platform and I want to make it as cheap as possible. I’m not expecting high load on my application so I don’t want to run a Compute instance because most of the time it will be idle. So I decided to try Cloud functions.
The scenario:
Webhook sends an HTTP request to the Cloud function.
The Cloud function connects to the database, creates a record, and sends a message to a Pub/Sub topic.
The message from the Pub/Sub topic may be processed by another app; that doesn't matter for now.
The questions are:
a) Is this a valid scenario for a Google Cloud Function, connecting to a SQL instance? I tried to find a sample of a function connecting to a database but found nothing. However, the GCP docs do explain how to connect to a Cloud SQL instance from a function.
b) Is it a good idea to use Java as the runtime and the Spring Boot framework for this purpose? I don't want to write the code in pure JDBC, but using a JPA library may lead to a huge cold-start time.
Thanks
Cloud Functions is single purpose. Your process is clearly single purpose. Cloud Functions is the right choice.
However, the Java runtime for Cloud Functions is a very fresh beta (only about 10 days old). Google Cloud betas are reliable, but if you are looking for a service that will be in GA soon, Java is not the right choice here.
If GA is a requirement, 2 alternatives:
Use Cloud Run (very similar to Cloud Functions, and with the "same price", at least for your case). I wrote an article on this.
Use another language (Go, Python, Node)
Now, your concern about the cold start is real. I'm a Spring Boot fan, and I switched from Java to Python (and then to Go; I don't like dynamically typed languages) because of cold starts. My first pain was on Cloud Run, where I was an alpha tester, and I wrote an article about that too.
Spring is a CPU and memory monster, and the cold starts are awful; that's the trade-off of an easy-to-use framework. Today you can set 2 vCPUs on Cloud Run, or set a minimum number of instances, if you want to minimize the cold start, but it's not free!
So, your process seems very simple.
Is a strong framework like Spring really required for "only this"? Raw SQL works well; JPA is not always the right solution!
Did you think about the Micronaut alternative? The annotations and behavior are very close to Spring, but there is no dynamic loading, so it starts quickly.
Did you consider any other languages? Java can be quick (startup and processing), but in any case it costs in memory usage (around 250 MB vs 15 MB in Go for the same hello-world). For a simple development like this, it's a good playground for testing new things, and because of the small size, it will be easy for anyone who doesn't know the language to maintain.
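To make the scenario concrete, here is a very rough sketch of the webhook function in one of those other runtimes (Node). The events table, topic name, and environment variables are assumptions; the unix-socket path is the documented way functions reach a Cloud SQL instance.

// index.js (hypothetical HTTP Cloud Function): webhook -> Cloud SQL insert -> Pub/Sub publish
const { Pool } = require("pg");
const { PubSub } = require("@google-cloud/pubsub");

const pool = new Pool({
  user: process.env.DB_USER,
  password: process.env.DB_PASS,
  database: process.env.DB_NAME,
  // Cloud SQL is exposed to the function as a unix socket under /cloudsql/<PROJECT>:<REGION>:<INSTANCE>
  host: `/cloudsql/${process.env.CLOUD_SQL_CONNECTION_NAME}`,
});
const pubsub = new PubSub();

exports.webhook = async (req, res) => {
  // "events" and its columns are made-up placeholders.
  const { rows } = await pool.query(
    "INSERT INTO events (payload) VALUES ($1) RETURNING id",
    [JSON.stringify(req.body)]
  );
  await pubsub
    .topic(process.env.TOPIC_NAME || "events")
    .publishMessage({ data: Buffer.from(JSON.stringify({ id: rows[0].id })) });
  res.status(200).send("ok");
};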
Happy coding!

Is blockchain a decentralised database?

I understand bitcoin uses blockchain technology to maintain a decentralised ledger of all transactions. I have also read many posts alluding to future applications of blockchain technology, none of which have been very clear to me.
Is blockchain technology simply a decentralised database with consensus validation of the data? If that is the case, surely the DB would grow too large to be effectively decentralised?
To help me understand, can anyone point me to a clear example of a non-bitcoin blockchain application?
Yes, it's true that the blockchain database grows over time, which is what is called "blockchain bloat". Currently the Bitcoin blockchain grows by roughly less than 100 MB a day. Today (2016) it takes up about 60-100 GB of space, which took about 6 years to accumulate. It is indeed growing faster, but it is also limited by the block-size cap of 1 MB per block (one block roughly every 10 minutes). Some proposed solutions have been:
SPV nodes: this is how your phone doesn't need to download the entire blockchain; it retrieves just the data it needs from nodes that do hold the entire blockchain.
Lightning Network: this is how Bitcoin can overcome the 1 MB block-size cap.
Those are just some of the solutions for Bitcoin that I know of. As for altcoin-related solutions: NXT/Ardor has implemented pruned data. Because NXT/Ardor lets you upload arbitrary data and messages onto its blockchain, the bloat is much more apparent there. The NXT/Ardor blockchain can delete that data after 2 weeks and keep only the hash of the data on the blockchain, which takes just a few KB. A node can also retain all of the blockchain data with pruning turned off, which marks it as an archival node, and other nodes can replicate it and become archival nodes too.
From my understanding, NXT/Ardor is one of the few blockchains with a production-ready decentralized data-storage system, marketplace, stock exchange, and messaging system built into its blockchain.
Blockchain is not just a decentralised database; it is much more than that. While the original Bitcoin blockchain allowed only value to be transferred, along with limited data with every transaction, several new blockchains have been developed in the past 2-3 years with much more advanced native scripting and programming capabilities.
Apart from the Bitcoin blockchain, I would say there are a few other major blockchains, like Ethereum, Ripple, R3's Corda, and Hyperledger. Although Ethereum has a crypto-currency called Ether, it is actually a Turing-complete EVM (Ethereum Virtual Machine). Using Ethereum, you can create smart contracts that themselves run in a decentralised manner. As a developer, it opens up completely new avenues for you and changes your perspective on writing programs. While Ripple is mainly geared towards payments, Corda and Hyperledger are built to be private/permissioned blockchains, to address issues such as scalability, privacy, and identity. The target markets for Hyperledger and Corda are mostly banks and other financial institutions.
As for non-bitcoin applications of blockchain, you can certainly look at companies like Consensys (multiple different use cases on blockchain), Digix Global (gold tokens on the blockchain), Everledger (tracking of diamonds on the blockchain), Otonomos (company registration on the blockchain), and OT Docs (trade finance and document versioning on the blockchain), amongst others.
Blockchain is:
Name for a data structure,
Name for an algorithm,
Name for a suite of Technologies,
An umbrella term for purely distributed peer-to-peer systems with a common application area,
A peer-to-peer-based operating system with its own unique rule set that utilizes hashing to provide unique data transactions with a distributed ledger
Blockchain is much more than a "database". Yes, the blocks on the chain store data, but it is more like a service, and there are many applications of blockchain beyond currency.
Blockchain is a combination of a P2P network, a decentralised database, and asymmetric cryptography.
A P2P network means you can transfer data between two different network nodes without any middleman; a decentralised DB means every node of the network holds a replica of the network's DB; and asymmetric cryptography means you can use digital signatures to validate the authenticity and integrity of messages.
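A tiny sketch of just the hash-chaining part (no P2P network, consensus, or signatures) shows why tampering with an old block is detectable: every block commits to the hash of its predecessor.

// Minimal hash-chained "blocks" using Node's built-in crypto module (illustration only).
const crypto = require("crypto");

const sha256 = (s) => crypto.createHash("sha256").update(s).digest("hex");

function makeBlock(prevHash, data) {
  return { prevHash, data, hash: sha256(prevHash + JSON.stringify(data)) };
}

const genesis = makeBlock("0".repeat(64), { note: "genesis" });
const block1 = makeBlock(genesis.hash, { from: "alice", to: "bob", amount: 5 });
const block2 = makeBlock(block1.hash, { from: "bob", to: "carol", amount: 2 });

// Changing block1's data changes its hash, so block2.prevHash no longer matches it;
// in a real network, every honest node would notice the mismatch.
console.log(block2.prevHash === block1.hash); // true until block1 is tampered with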

Updating Redis Click-To-Deploy Configuration on Compute Engine

I've deployed a single micro-instance redis on compute engine using the (very convenient) click-to-deploy feature.
I would now like to update this configuration to have a couple of instances, so that I can benchmark how this increases performance.
Is it possible to modify the config while it's running?
The other option would be to add a whole new Redis deployment, bleed traffic onto it over time, and eventually shut down the old one. Not only does this sound like a pain in the butt, but I also can't see any way in the web UI to click-to-deploy multiple clusters.
I've only got my learner's license with all this, so I would also appreciate any general 'good-to-knows'.
I'm on the Google Cloud team working on this feature and wanted to chime in. Sorry no one replied to this for so long.
We are working on some of the features you describe that would surely make the service more useful and powerful. Stay tuned on that.
I admit that there really is not a good solution for modifying an existing deployment to date, unless you launch a new cluster and migrate your data over / redirect reads and writes to the new cluster. This is a limitation we are working to fix.
As a workaround for creating two deployments using Click to Deploy with Redis, you could create a separate project.
Also, if you wanted to migrate to your own template using the Deployment Manager API https://cloud.google.com/deployment-manager/overview, keep in mind Deployment Manager does not have this limitation, and you can create multiple deployments from the same template in the same project.
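For instance, assuming you do move to your own Deployment Manager template (the deployment names and config file below are made up), two independent Redis deployments could live in the same project like this:

# Each deployment gets its own name, so both can coexist in one project.
gcloud deployment-manager deployments create redis-bench-a --config redis.yaml
gcloud deployment-manager deployments create redis-bench-b --config redis.yaml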
Chris