Solidity -- using uninitialized storage pointers safely - optimization

I'm trying to gas-optimize the following Solidity function by letting users sort the array they pass in so that I can perform fewer storage reads. However, to do this I need to use an uninitialized storage pointer -- which the compiler doesn't let me do (^0.8.0). How can I safely use an uninitialized storage pointer and have it be accepted by the compiler?
function safeBatchReleaseCollaterals(
    uint256[] memory bondIds,
    uint256[] memory collateralIds,
    address to
) public {
    // 'memoization' variables
    uint256 lastAuthorizedBond = 2**256 - 1;
    uint256 lastCurrencyRef = 2**256 - 1;
    Currency storage currency;
    for (uint256 i = 0; i < bondIds.length; i++) {
        uint256 bondId = bondIds[i];
        // check if we authorized this bond previously?
        if (lastAuthorizedBond != bondId) {
            require( // expensive check. Reads 2 slots!!
                _isAuthorizedToReleaseCollateral(bondId, msg.sender),
                "CollateralManager: unauthorized to release collateral"
            );
            lastAuthorizedBond = bondId;
        }
        uint256 collateralId = collateralIds[i];
        Collateral storage c = collateral[bondId][collateralId];
        // check if we read this Currency previously?
        if (lastCurrencyRef != c.currencyRef) {
            currency = currencies[c.currencyRef]; // expensive 1 slot read
            lastCurrencyRef = c.currencyRef;
        }
        _transferGenericCurrency(currency, address(this), to, c.amountOrId, "");
        emit CollateralReleased(bondId, collateralId, to);
    }
}
As a quick explanation of the structure: this is similar to a batch ERC-1155 transfer, except that I'm storing a lot of data related to the transaction in slots under the Currency, Collateral and Bond objects. Since the reads can get intensive, I want to save gas by caching them. Because an actual cache map would itself be expensive, I instead cache only the previous list item and rely on the user to sort the array in whatever order results in the smallest gas cost.
The lastAuthorizedBond variable caches which bondId was last authorized -- if it repeats, we can cut out an expensive 2-slot read, which gave roughly 16% gas savings in tests, so the effect is significant. I tried doing something similar with the currency read: storing lastCurrencyRef and keeping the actual result of the read in the currency variable. The compiler complains about this, however, and maybe justly so.
Is there a way to get this past the compiler, or do I just have to ditch this optimization? Given that nobody is allowed to register the 2**256-1 currency or bond, is this code even safe?
Note that the collateral entry gets deleted after this runs -- cannot double release.
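One pattern the compiler does accept (a minimal sketch with simplified stand-in types, not a drop-in change to the contract above): initialize the pointer to a concrete slot at declaration. Binding a storage reference only computes a slot location, and the sentinel guarantees the loop rebinds the pointer before any field is read.
// SPDX-License-Identifier: MIT
// Sketch only: pointing `currency` at an arbitrary mapping entry (key 0 here,
// assumed harmless) satisfies the compiler; the sentinel guarantees the first
// loop iteration rebinds the pointer before any field is actually read.
pragma solidity ^0.8.0;

contract StoragePointerSketch {
    struct Currency {
        address token;
        uint8 kind;
    }

    mapping(uint256 => Currency) internal currencies;

    function touch(uint256[] memory refs) external view returns (address last) {
        uint256 lastCurrencyRef = type(uint256).max; // same sentinel as 2**256 - 1
        Currency storage currency = currencies[0];   // dummy target, never read as-is
        for (uint256 i = 0; i < refs.length; i++) {
            if (lastCurrencyRef != refs[i]) {
                currency = currencies[refs[i]];
                lastCurrencyRef = refs[i];
            }
            last = currency.token; // the SLOAD happens here, at field access
        }
    }
}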

Related

I need to understand the smart contract code below

Can you help me explain the smart contract code below, which I found in the Tomb Finance tomb.sol contract?
// Initial distribution for the first 24h genesis pools
uint256 public constant INITIAL_GENESIS_POOL_DISTRIBUTION = 11000 ether;
// Initial distribution for the day 2-5 TOMB-WFTM LP -> TOMB pool
uint256 public constant INITIAL_TOMB_POOL_DISTRIBUTION = 140000 ether;
// Distribution for airdrops wallet
uint256 public constant INITIAL_AIRDROP_WALLET_DISTRIBUTION = 9000 ether;
Why do they distribute ether for the pools?
Why ether?
Can they do that?
What exactly is the value of 1 ether?
If they had deployed this on BNB Chain, would this code change?
This snippet alone doesn't distribute any ether; it only declares 3 constants. It's likely that there are other functions in the code, not shared here, that make use of these constants.
ether in this case is a Solidity global unit. No matter which network you deploy the contract on, it multiplies the specified number by 10^18 (or 1000000000000000000). The current version of Solidity (0.8) is not able to store decimal numbers, so all native and ERC-20 balances are stored in the smallest units of the token. In the case of native tokens (ETH on Ethereum, MATIC on Polygon, ...), that's wei. And 10^18 wei == 1 ETH (or 1 MATIC, etc - depending on the network).
If this code were deployed on another EVM network (such as Binance Smart Chain), the ether unit would behave the same. It doesn't work with ETH tokens; it "just" multiplies the number.
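To make that concrete, here is a minimal sketch (assumed contract name) showing that ether is just a compile-time multiplier:
// SPDX-License-Identifier: MIT
// Minimal sketch: `ether` is only a unit suffix that multiplies the literal
// by 10**18; it moves no value and behaves identically on any EVM chain.
pragma solidity ^0.8.0;

contract EtherUnitSketch {
    uint256 public constant INITIAL_GENESIS_POOL_DISTRIBUTION = 11000 ether;

    function sanityCheck() external pure returns (bool) {
        // 1 ether == 10**18 wei; the constant above is simply 11000 * 10**18
        return 1 ether == 1e18 && INITIAL_GENESIS_POOL_DISTRIBUTION == 11000 * 1e18;
    }
}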

Why does this Solidity function run into gas errors?

I'm trying to figure out some strange behavior. The function below takes in an array like [1,2,3,4,5], loops through it, and looks at another contract to verify ownership. I wrote it like this (taking in a controlled / limited array) to limit the amount of looping required (to avoid gas issues). The weird part (well, to me) is that I can run this a few times and it works great, mapping the unmapped values. It will always process as expected until I run about 50 items through it. After that, the next time it will gas out even if the array includes only one value. So, I'm wondering what's going on here...
function claimFreeNFTs (uint[] memory _IDlist) external payable noReentrant {
    IERC721 OGcontract = IERC721(ERC721_contract);
    uint numClaims = 0;
    for (uint i = 0; i < _IDlist.length; i++) {
        uint thisID = _IDlist[i];
        require(OGcontract.ownerOf(thisID) == msg.sender, 'Must own token.');
        if ( !claimedIDList(thisID) ) { // checks mapping here...
            claimIDset(thisID); // maps unmapped values here;
            numClaims++;
        }
    }
    if ( numClaims > 0 ) {
        _safeMint(msg.sender, numClaims);
        emit Mint(msg.sender, totalSupply());
    }
}
Any thoughts / directions appreciated. :-)
Well, there was a bit more to the function, actually. I'd edited out some of what I thought was extraneous, but it turned out my error was in the extra stuff. The above does actually work. (Sorry.) After doing the mint, I was also reducing the supply of a reserve wallet on the contract -- one that held (surprise!) 50 NFTs. So, after this function had processed 50 items, it was trying to make that wallet hold a negative number of NFTs, which screwed things up. Long story short: on Remix, I'd forgotten to set values in the constructor in the proper order, which is how I broke it in the first place. Anyway, solved.
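For reference, a minimal sketch of that failure mode with assumed names (not the original contract): under Solidity ^0.8, decrementing a reserve counter below zero reverts with an arithmetic panic, and a call that is doomed to revert often surfaces in tooling as a gas error, no matter how small the input array is.
// SPDX-License-Identifier: MIT
// Sketch with assumed names: once reserveSupply would go below zero, the
// subtraction panics (error code 0x11) and every later call reverts,
// regardless of how few IDs are passed in.
pragma solidity ^0.8.0;

contract ReserveUnderflowSketch {
    uint256 public reserveSupply = 50; // e.g. a reserve wallet holding 50 NFTs

    function claim(uint256 numClaims) external {
        // ...minting logic elided...
        reserveSupply -= numClaims; // reverts when numClaims > reserveSupply
    }
}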

"Out of Gas" when calling function twice quickly, but not when calls are spaced out?

I have a smart contract, and one of the functions (queue) is meant to allow users to find "matches" with other users of the smart contract. The logic is that if you call queue and there is nobody waiting, you are now the queued user / wallet address. If you call queue and there is already a queued user, you clear them from the queue and set up the match.
This works fine if the first queue call is a few seconds before the second one, but if both users call queue at the same time, the second one always reverts with an Out of Gas error. Increasing the amount of gas does not solve the issue.
I would appreciate any ideas!
The code fails in the if block. If I remove most of the logic, it succeeds, but I can't figure out any rhyme or reason as to why.
if (awaitingMatch != address(0)) {
    userMap[awaitingMatch].opponent = msg.sender;
    userMap[awaitingMatch].matchedBlock = block.number;
    userMap[awaitingMatch].matchWins = 0;
    userMap[awaitingMatch].playAmount = msg.value;
    userMap[awaitingMatch].winsNeeded = winsToWin;
    userMap[msg.sender].opponent = awaitingMatch;
    userMap[msg.sender].matchedBlock = block.number;
    userMap[msg.sender].matchWins = 0;
    userMap[msg.sender].winsNeeded = winsToWin;
    awaitingMatch = address(0);
    emit Match(msg.sender);
    emit Match(userMap[msg.sender].opponent);
// add this guy to the list awaiting a match, and set his deposit flag true
} else {
    awaitingMatch = msg.sender;
}
I think I have figured this out. The issue is that MetaMask tries to estimate the amount of gas that will be used for each transaction. MetaMask is quite good at this, and analyzes the state of the contract before estimating the gas. The if section (run by the second caller) does a lot more work than the else section (run by the first caller). If I make both calls at the same time, both estimates assume the lighter else section will run, but one of the transactions winds up actually executing the more expensive if section.
I think my best bet here is to tweak the amount of gas being supplied on any call to a function like this that could do quite different amounts of work depending on the moment the function is called.

Cheapest way to access and modify a struct

I am using Solidity 0.8.10.
In my contract I have a state variable struct:
struct Product {
    uint id_prod;
    address payable producer_addr;
    address payable owner_addr;
    bool onSale;
}
and a state variable array of products:
Product[] public ProductList;
and a function that allows modifying the attributes of a product. Nothing really complex.
Considering the cost of deploying and using the contract, I think there are two ways of changing the attributes of a product.
Solution 1, by using a storage variable:
Product storage _product = ProductList[_id_product];
_product.owner_addr = payable(msg.sender);
_product.onSale = false;
Solution 2, without a storage variable:
ProductList[_id_product].owner_addr = payable(msg.sender);
ProductList[_id_product].onSale = false;
Which solution is the cheapest, cleanest, most advisable?
First, the numbers:
I deployed a contract similar to yours and called each function. Although implementations can vary, the transaction costs were:
Solution 1: 28785 gas
Solution 2: 28985 gas
And, deployment costs were:
Solution 1: 275180
Solution 2: 281006
Solution 2 re-derives the element's storage location (including the bounds check against the array length) for every statement, so it performs extra storage reads on top of the writes to the struct's fields.
Solution 1 binds a storage pointer once, so the location is computed and checked a single time and every later access goes through that reference.
The roughly 200 gas difference in transaction cost is consistent with those extra warm storage accesses; since EIP-2929, a warm SLOAD costs 100 gas (a cold one costs 2100).
So, in terms of readability (imho), transaction cost and deployment cost, Solution 1 is the better choice. And the more fields you touch, the more efficient Solution 1 becomes.
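If you want to reproduce the numbers yourself, here is a compact, self-contained version of the two approaches (assumed contract and function names, reusing the struct from the question):
// SPDX-License-Identifier: MIT
// Sketch for benchmarking: updateWithPointer binds the storage reference once,
// while updateDirect re-derives the element location (and repeats the array
// bounds check) on every statement.
pragma solidity ^0.8.10;

contract ProductStorageSketch {
    struct Product {
        uint id_prod;
        address payable producer_addr;
        address payable owner_addr;
        bool onSale;
    }

    Product[] public ProductList;

    function addProduct() external {
        ProductList.push(Product(ProductList.length, payable(msg.sender), payable(msg.sender), true));
    }

    // Solution 1: one bounds check and location computation, then field writes.
    function updateWithPointer(uint _id_product) external {
        Product storage _product = ProductList[_id_product];
        _product.owner_addr = payable(msg.sender);
        _product.onSale = false;
    }

    // Solution 2: the indexing (and its length check) repeats for each statement.
    function updateDirect(uint _id_product) external {
        ProductList[_id_product].owner_addr = payable(msg.sender);
        ProductList[_id_product].onSale = false;
    }
}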

Memory ownership in PKCS #11 C_FindObjects where ulMaxObjectCount != 1

The authors of PKCS #11 v2.40 utilize a common pattern when an API returns a variable length list of items. In APIs such as C_GetSlotList and C_GetMechanismList, the application is expected to call the APIs twice. In the first invocation, a pointer to a CK_ULONG is set to the number of items that will be returned on the next invocation. This allows the application to allocate enough memory and invoke the API again to retrieve the results.
The C_FindObjects call also returns a variable number of items, but it uses a different paradigm. The parameter CK_OBJECT_HANDLE_PTR phObject is set to the head of the result list. The parameter CK_ULONG_PTR pulObjectCount is set to the number of items returned, which is guaranteed not to exceed CK_ULONG ulMaxObjectCount.
The standard does not explicitly say that phObject must be a valid pointer to a block of memory large enough to hold ulMaxObjectCount CK_OBJECT_HANDLEs.
One could interpret the standard as meaning that the application must pessimistically allocate enough memory for ulMaxObjectCount objects. Alternatively, one could interpret it as meaning that the PKCS #11 implementation will allocate pulObjectCount CK_OBJECT_HANDLEs and that it is then the application's responsibility to free that memory. This latter interpretation seems suspect, however, as nowhere else in the standard does the PKCS #11 implementation ever allocate memory.
The passage is:
C_FindObjects continues a search for token and session objects that
match a template, obtaining additional object handles. hSession is
the session’s handle; phObject points to the location that receives
the list (array) of additional object handles; ulMaxObjectCount is
the maximum number of object handles to be returned; pulObjectCount
points to the location that receives the actual number of object
handles returned.
If there are no more objects matching the template, then the location
that pulObjectCount points to receives the value 0.
The search MUST have been initialized with C_FindObjectsInit.
The non-normative example is not very helpful, as it sets ulMaxObjectCount to 1. It does, however, allocate the memory for that one entry, which seems to indicate that the application must pessimistically pre-allocate the memory.
CK_SESSION_HANDLE hSession;
CK_OBJECT_HANDLE hObject;
CK_ULONG ulObjectCount;
CK_RV rv;
.
.
rv = C_FindObjectsInit(hSession, NULL_PTR, 0);
assert(rv == CKR_OK);
while (1) {
    rv = C_FindObjects(hSession, &hObject, 1, &ulObjectCount);
    if (rv != CKR_OK || ulObjectCount == 0)
        break;
    .
    .
}
rv = C_FindObjectsFinal(hSession);
assert(rv == CKR_OK);
Specification Link: http://docs.oasis-open.org/pkcs11/pkcs11-base/v2.40/pkcs11-base-v2.40.pdf
Yes, it would appear that the application is responsible for allocating space for the object handles returned by C_FindObjects(). The example code does this, even though it only requests a single object handle at a time, and so should you.
You could just as well rewrite the example code to request multiple object handles, e.g. like this:
#define MAX_OBJECT_COUNT 100 /* arbitrary value */

CK_SESSION_HANDLE hSession;
CK_OBJECT_HANDLE hObjects[MAX_OBJECT_COUNT];
CK_ULONG ulObjectCount, i;
CK_RV rv;

rv = C_FindObjectsInit(hSession, NULL_PTR, 0);
assert(rv == CKR_OK);
while (1) {
    rv = C_FindObjects(hSession, hObjects, MAX_OBJECT_COUNT, &ulObjectCount);
    if (rv != CKR_OK || ulObjectCount == 0) break;
    for (i = 0; i < ulObjectCount; i++) {
        /* do something with hObjects[i] here */
    }
}
rv = C_FindObjectsFinal(hSession);
assert(rv == CKR_OK);
Presumably, the ability to request multiple object handles in a single C_FindObjects() call is intended as a performance optimization.
FWIW, this is pretty much exactly how many C standard library functions like fread() work as well. It'd be extremely inefficient to read data from a file one byte at a time with fgetc(), so the fread() function lets you allocate an arbitrarily large buffer and read as much data as will fit into it.