Ethereum: How to deploy slightly bigger smart contracts?

How can I deploy large smart contracts? I tried it on Kovan and Ropsten and ran into issues with both. I have 250 lines of code plus all the imported ERCBasic and Standard files.
The size of the compiled bin file is 23 kB. Has anybody experienced similar problems with contracts of this size?
UPDATE:
It is possible to decrease the compiled contract size by compiling with the optimizer enabled:
solc filename.sol --optimize
In my case this reduced the compiled binary from 23 kB to about 10 kB.
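Depending on the solc version, the optimizer's runs setting can also be tuned: a low value biases the optimizer towards smaller deployed bytecode at the cost of slightly more expensive calls at runtime. For example (treat the exact flags as an illustration for recent solc releases):
solc --optimize --optimize-runs 1 --bin filename.sol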

Like everything else in Ethereum, limits are imposed by the gas consumed by your transaction. While gas imposes no exact size limit, there is a block gas limit, and the amount of gas you provide has to be within that limit. (Since the Spurious Dragon fork, EIP-170 also caps deployed bytecode at 24,576 bytes, but below that cap gas is the practical constraint.)
When you deploy a contract, there is an intrinsic gas cost, the cost of executing the constructor, and the cost of storing the bytecode. The intrinsic gas cost is fixed, but the other two are not. The more gas consumed by your constructor, the less is available for storing the code. Usually there is not a lot of logic in a constructor, and the vast majority of the gas consumed depends on the size of the contract. I'm adding that point here only to illustrate that this is not an exact contract size limit.
By far the bulk of the gas consumption comes from storing your contract bytecode on the blockchain. The Ethereum Yellow Paper (see page 9) dictates that the cost of storing the contract is
cost = Gcodedeposit * o
where o is the size (in bytes) of the deployed contract bytecode and Gcodedeposit is 200 gas/byte.
If your bytecode is 23 kB, then this cost alone will be ~4.6M gas. Add the intrinsic gas cost and the cost of the constructor execution, and you are probably getting close to the block gas limit.
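As a rough worked example with assumed round numbers (taking 23 kB as 23,000 bytes and the ~8M block gas limit that was typical on these networks):
code deposit ≈ 23,000 bytes * 200 gas/byte = 4,600,000 gas
total ≈ 4,600,000 + 21,000 (base transaction) + 32,000 (contract creation) + calldata gas for the bytecode sent in the transaction + constructor execution gas
which already uses well over half of the block gas limit before the optimizer is applied.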
To avoid this problem, you need to break down your contracts into libraries/split contracts, remove duplicate logic, remove non-critical functions, etc.
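As a minimal sketch of the library approach (contract and function names are illustrative, not from the question): a library with external functions is deployed once and linked, so its code is executed via DELEGATECALL and does not count towards the calling contract's deployed bytecode.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Deployed once at its own address and linked at deployment time.
library MathLib {
    function weightedAverage(uint256 a, uint256 b, uint256 w) external pure returns (uint256) {
        return (a * w + b * (100 - w)) / 100;
    }
}

// The main contract only contains the call into the library,
// which keeps its own deployed bytecode smaller.
contract Pricing {
    uint256 public lastQuote;

    function updateQuote(uint256 bid, uint256 ask) external {
        lastQuote = MathLib.weightedAverage(bid, ask, 60);
    }
}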
For some more low level examples of deployment cost, see this answer and review this useful article on Hackernoon.

It is possible to get around the max contract size limitation by implementing the Transparent Contract Standard: https://github.com/ethereum/EIPs/issues/1538
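The core of that standard is a dispatcher contract that maps function selectors to separately deployed delegate contracts and forwards calls with delegatecall, so the logic can be split across many contracts that each stay under the limit. A heavily simplified sketch of the idea (not the full EIP-1538 interface; names are illustrative):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract Dispatcher {
    address public owner;
    // Which delegate contract implements which function selector.
    mapping(bytes4 => address) public delegates;

    constructor() {
        owner = msg.sender;
    }

    // Register (or replace) the implementation for a selector.
    function setDelegate(bytes4 selector, address implementation) external {
        require(msg.sender == owner, "not owner");
        delegates[selector] = implementation;
    }

    // Forward any unknown call to the registered delegate,
    // executing its code against this contract's storage.
    fallback() external payable {
        address impl = delegates[msg.sig];
        require(impl != address(0), "function does not exist");
        assembly {
            calldatacopy(0, 0, calldatasize())
            let ok := delegatecall(gas(), impl, 0, calldatasize(), 0, 0)
            returndatacopy(0, 0, returndatasize())
            switch ok
            case 0 { revert(0, returndatasize()) }
            default { return(0, returndatasize()) }
        }
    }
}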

Related

Why are the transaction cost and execution cost the same in Remix IDE all the time?

I recently started learning how to develop smart contracts using Solidity in the Remix IDE.
I'm using Remix VM (London) environment.
My question is, how can transaction costs and execution costs be the same in all transactions?
I know that transaction cost is the cost of putting data on the blockchain, and execution cost is the cost of executing it.
I'd appreciate your help. Thanks.
The transaction and execution costs will be the same every time the same work is being done.
If you have a function which calculates A + B and returns C, this will always be consistent and cost the same.
If you have a function which saves a string input by the user, the cost will change based on the size of the string the user inputs.
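A minimal Solidity illustration of the point above (names are illustrative):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract GasExample {
    string public note;

    // Same inputs, same work: Remix will report the same
    // transaction and execution cost on every call.
    function add(uint256 a, uint256 b) public pure returns (uint256) {
        return a + b;
    }

    // The cost grows with the length of the stored string,
    // so different inputs give different costs.
    function saveNote(string calldata s) public {
        note = s;
    }
}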

How to debug and improve memory consumption on PowerBI Embedded service

I am currently using the Power BI Embedded service from Azure with an A1 unit, which is constantly reaching peak memory consumption and thus causing errors in the visualization of production reports.
1) Is there any way to identify which reports/pages/visuals are consuming the largest share of memory?
2) What would be the overall best strategy (on a high-level, general analysis) to reduce required memory? Would that be reducing the amount of data being loaded, reducing the number of pages, reducing the number of visuals, or any other possible strategy?
You can deploy the Power BI Premium Capacity Metrics app; it works for capacities, both Premium and Embedded. It will show dataset memory usage and other metrics on the capacity.
1) Is there any way to identify which reports/pages/visuals are consuming the largest share of memory?
It will give a good overview of memory usage and what is causing datasets and reports to time out or be evicted. Check the link for the full list of metrics.
2) What would be the overall best strategy (on a high-level, general analysis) to reduce required memory? Would that be reducing the amount of data being loaded, reducing the number of pages, reducing the number of visuals, or any other possible strategy?
Yes: reduce dataset sizes, and look for reports that pull in a large number of columns but only use a few of them. Look at badly written queries and data models. For visuals, each visual on a page is a query, and each query uses memory. I've had issues where people had 30 visuals on a page; reducing them made it a lot quicker.
Look at the usage: are lots of reports being loaded at once? This can lead to dataset evictions, where a dataset is dumped out of memory because other reports are taking priority. The metrics app will give you some pointers as to what is happening; you'll have to take it from there and determine the root cause.
As it is an A SKU, you can set up an Azure Automation runbook or Logic App to scale the SKU up and down, or even pause it when it is not needed. Also, A1 and A2 are shared capacity, not dedicated (A3 onwards is dedicated), so you may have to account for noisy-neighbour issues in the background, which will not show up in the metrics app.
Hope that helps

What units can be used to benchmark CPU usage? Percentage seems unhelpful

I regularly see coder discussions here about CPU usage and questions about reducing 'high usage', covering everything from Javascript functions to compiled C executables.
I notice that almost always people are referring to the percentage of CPU being consumed, which naturally varies hugely according to where the code is running, e.g. "When I run this I get 80% CPU usage, so I need to optimise my code".
While it's clear that 'high CPU usage' from looping code is often a good indicator that something is wrong and the code needs to sleep a little or be refactored, I am surprised that I cannot find a common unit of processing measurement used to describe intense CPU usage, rather than the percentage of the author's own machine's CPU.
We can easily measure memory/disk usage by an algorithm on a certain platform, but is there any easily attainable and consistent useful figure for an amount of processing that could be used to compare usage?
Are FLOPS still used in the modern world, for instance?

Clarification in the Ethereum White Paper

I was going through the Ethereum White Paper, and it mentions that the scripting language implemented in the Bitcoin blockchain has the limitations of value-blindness and blockchain-blindness (points 2 and 4 in the paper). I am finding it hard to comprehend what this means. It would be great if someone could help me understand this with an example.
Value Blindness:
There is no way for a UTXO script to provide fine-grained control over the amount that can be withdrawn. For example, one powerful use case of an oracle contract would be a hedging contract, where A and B put in $1000 worth of BTC and after 30 days the script sends $1,000 worth of BTC to A and the rest to B. This would require an oracle to determine the value of 1 BTC in USD [Note 3], but even then it is a massive improvement in terms of trust and infrastructure requirement over the fully centralized solutions that are available now. However, because UTXO are all-or-nothing, the only way to achieve this is through the very inefficient hack of having many UTXO of varying denominations (eg. one UTXO of 2k for every k up to 30) and having O pick which UTXO to send to A and which to B.
Blockchain-blindness
UTXO are blind to certain blockchain data such as the nonce and previous block hash. This severely limits applications in gambling, and several other categories, by depriving the scripting language of a potentially valuable source of randomness.
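For contrast, the hedging example above is straightforward in Ethereum's stateful model, because a contract can send arbitrary portions of its balance. A heavily simplified Solidity sketch (the oracle interface and all names are assumptions for illustration, not part of the white paper):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Assumed oracle: reports the ETH price in USD cents.
interface IPriceOracle {
    function ethPriceInUsdCents() external view returns (uint256);
}

contract SimpleHedge {
    address public partyA;
    address public partyB;
    IPriceOracle public oracle;
    uint256 public maturity;
    uint256 public constant PAYOUT_USD_CENTS = 100_000; // $1,000 for party A

    // The contract is funded with the pooled ETH when it is created.
    constructor(address _partyB, IPriceOracle _oracle) payable {
        partyA = msg.sender;
        partyB = _partyB;
        oracle = _oracle;
        maturity = block.timestamp + 30 days;
    }

    // After 30 days, A receives $1,000 worth of ETH at the oracle price
    // and B receives the remainder: the fine-grained control over value
    // that an all-or-nothing UTXO script cannot express.
    function settle() external {
        require(block.timestamp >= maturity, "not matured");
        uint256 price = oracle.ethPriceInUsdCents(); // USD cents per 1 ETH
        uint256 amountForA = (PAYOUT_USD_CENTS * 1 ether) / price;
        if (amountForA > address(this).balance) {
            amountForA = address(this).balance;
        }
        payable(partyA).transfer(amountForA);
        payable(partyB).transfer(address(this).balance);
    }
}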

Bloomberg Professional Terminals

Does anyone have a way to measure a Bloomberg Terminal's general usage or Excel API data usage?
This is the most informed way I have seen so far; does anyone agree, disagree, or have a better way?
https://www.howtogeek.com/howto/43713/how-to-monitor-the-bandwidth-consumption-of-individual-applications/
Thanks
Agree if you're just trying to assess your own server bandwidth usage.
If you're referring to actual data consumption, it depends a little on what platform you're doing it on. You can't infer cost utilization from bandwidth usage, because some fields/data are significantly more expensive, so you may have a person using relatively less data but incurring notably larger costs due to the entitlements they have requested or are using.
For the enterprise license, you can get a monthly usage summary in arrears for the utilization so it's pretty easy to track.
For B-Pipe you can run BPUR or DFRP on the terminal to run the usage stats, but you need to be permissioned.
On a straight terminal, as you mentioned, this data is tracked by Bloomberg to ensure compliance with the TOS, so you can talk to your rep about getting some usage figures. The limits team at Bloomberg, though, sees it as binary: you're either over or you're not.