I've read here that it is possible to mint 2^256 NFTs in a single transaction. I've tried to achieve this by directly assigning the _owners and _balances mappings, but of course these are private variables, so I can't change them. I also tried overriding _mint(), but that didn't work either. How does this process work?
For simplicity, let's work with a 10k NFT scenario.
It's not about invoking a single mint() function 10k times; it's about building your contract logic in a way that treats a whole range of IDs as valid.
Using the MFS (Mutable File System) part of IPFS, you can upload multiple files into one folder, addressable by the same directory ID plus the actual file names. Example:
https://ipfs.io/ipfs/<dir_id_abc>/1.json
https://ipfs.io/ipfs/<dir_id_abc>/2.json
https://ipfs.io/ipfs/<dir_id_abc>/3.json
etc...
These metadata files contain links to the images.
Your contract can then implement a custom function that shadows an authorized address as the owner of an NFT if both of the following conditions are met:
The ID is in a valid range (in our case 1-10k)
The NFT is not owned by anybody else (i.e. it's owned by the default address 0x0)
function _exists(uint256 tokenId) internal view override returns (bool) {
    if (tokenId >= 1 && tokenId <= 10000) {
        return true;
    }
    return super._exists(tokenId);
}
function ownerOf(uint256 tokenId) public view override returns (address) {
    address owner = _owners[tokenId];
    // The ID is in a valid range (in our case 1-10k)
    // The NFT is not owned by anybody else (i.e. it's owned by the default address 0x0)
    if (tokenId >= 1 && tokenId <= 10000 && owner == address(0x0)) {
        // shadows an authorized address as the owner
        return address(0x123);
    }
    return super.ownerOf(tokenId);
}
The tokenURI() function then validates the token existence (using the _exists() function) and returns the final URI concatenated from the base URI (https://ipfs.io/ipfs/<dir_id_abc>/), the ID, and the .json suffix.
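For illustration, a minimal sketch of such a tokenURI() function, assuming the _exists() override above and OpenZeppelin's Strings library for converting the ID to a string (the base URI is the placeholder from the example above):

function tokenURI(uint256 tokenId) public view override returns (string memory) {
    require(_exists(tokenId), "URI query for nonexistent token");
    return string(abi.encodePacked(
        "https://ipfs.io/ipfs/<dir_id_abc>/",  // base URI
        Strings.toString(tokenId),             // the ID
        ".json"                                // the suffix
    ));
}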
Mind that this approach does not work on the OpenZeppelin implementation, as their _owners property is private and not readable from child contracts. But you can take this snippet as an inspiration for a custom implementation that allows simulating an arbitrary default owner of 10k (or even 2^256) tokens.
To be honest, I don't know how that could be possible without paying ungodly amounts of gas. Why are you trying to mint that many tokens? Are you trying to get all the NFTs in a collection? If so, you'll have to pay the gas costs for every mint regardless.
What I see in several smart contracts written in Solidity is that a public function is defined whose only job is to call another function that is private or internal.
Here is an example from ERC20Burnable.sol.
In this function _burn is internal, but burn is public.
function burn(uint256 amount) public virtual {
    _burn(_msgSender(), amount);
}
Or here is another one from ERC1155.sol:
function safeBatchTransferFrom(
    address from,
    address to,
    uint256[] memory ids,
    uint256[] memory amounts,
    bytes memory data
) public virtual override {
    require(
        from == _msgSender() || isApprovedForAll(from, _msgSender()),
        "ERC1155: caller is not token owner or approved"
    );
    _safeBatchTransferFrom(from, to, ids, amounts, data);
}
What is the benefit of this structure? Why is it so common in smart contracts?
Thanks.
One reason for this, I guess, is that this way we are able to override the parent implementation, add modifiers, etc.
It's a common practice used in other OOP languages as well.
One of the reasons is code reusability. If the same snippet (e.g. decrease the balance of one address, increase the balance of another address, and emit an event) is used in multiple methods (e.g. both transfer() and transferFrom()), you can bundle it into one private or internal function (e.g. _transfer()) and then call that function from both public functions. When you need to change the logic, you only have to change it in one place instead of searching for multiple places and possibly leaving some out by mistake.
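For illustration, a stripped-down sketch of that reuse pattern (this is not the full OpenZeppelin code; allowance handling, zero-address checks and similar validation are omitted):

mapping(address => uint256) internal _balances;
event Transfer(address indexed from, address indexed to, uint256 value);

function transfer(address to, uint256 amount) public returns (bool) {
    _transfer(msg.sender, to, amount);
    return true;
}

function transferFrom(address from, address to, uint256 amount) public returns (bool) {
    // allowance check omitted for brevity
    _transfer(from, to, amount);
    return true;
}

// the shared logic lives in one internal function
function _transfer(address from, address to, uint256 amount) internal {
    _balances[from] -= amount;  // decrease balance of one address
    _balances[to] += amount;    // increase balance of the other address
    emit Transfer(from, to, amount);
}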
Another common reason - you already answered it yourself. This approach lets the user specify only some of the params - for example the amount. But the user cannot specify from which address the tokens are going to be burned - it's always their own address. Even though the internal function _burn() allows specifying the account to burn from, the user is not allowed to choose it.
Source code:
/**
 * @dev See {IERC721-safeTransferFrom}.
 */
function safeTransferFrom(
    address from,
    address to,
    uint256 tokenId
) public virtual override {
    safeTransferFrom(from, to, tokenId, "");
}

/**
 * @dev See {IERC721-safeTransferFrom}.
 */
function safeTransferFrom(
    address from,
    address to,
    uint256 tokenId,
    bytes memory data
) public virtual override {
    require(_isApprovedOrOwner(_msgSender(), tokenId), "ERC721: caller is not token owner or approved");
    _safeTransfer(from, to, tokenId, data);
}
Hi everyone.
While reading the OpenZeppelin ERC-721 source code, I found that it defines two safeTransferFrom methods with different implementations.
I am curious why it's made in this way. Could anyone help me with it?
Many thanks.
It follows the ERC-721 standard, which also defines two functions with the same name but with different input params. Generally in OOP, this is called function overloading.
As you can see in the OpenZeppelin implementation, when you call the function without the data param, it passes an empty value.
I can't speak for the authors of the standard, but to me it seems like a more developer friendly approach compared to having to explicitly pass the empty value, since Solidity doesn't allow specifying a default param value.
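For illustration, a minimal caller sketch showing how the two overloads are resolved (the contract and function names here are made up, and in practice the caller contract would also need to be approved to move the token):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/token/ERC721/IERC721.sol";

// Hypothetical caller contract, for illustration only.
contract Caller {
    function forwardToken(IERC721 nft, address to, uint256 tokenId) external {
        // The compiler picks the overload based on the argument list.
        nft.safeTransferFrom(msg.sender, to, tokenId);                  // no data; "" is passed internally
        // nft.safeTransferFrom(msg.sender, to, tokenId, "some data");  // explicit data param
    }
}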
Best Practice Advice:
Is it acceptable to override the totalSupply() of an ERC20 token if you're using a different variable to hold some of the supply, rather than holding all the tokens in the _totalSupply variable directly?
example:
...
uint _extraSupplyForGivingAway = 1e27; // 1e18 decimals * 1e9 tokens, just an example

function totalSupply() public view override returns (uint) {
    return super.totalSupply() + _extraSupplyForGivingAway;
}
The total supply of the contract is not only _totalSupply; it's _totalSupply plus the extra tokens.
Question: Do the community and/or exchanges find this acceptable or not?
There are two different issues here. One is, are you conforming to the EIP-20 (ERC-20) standard in a way that will be understood by the community-at-large? Another is, is this a reasonable implementation for your business logic?
The latter issue is out-of-scope here since you haven't really provided enough information. So I will address the first, which is what I believe you wanted to know.
The reason I spell this out is that you are fixating on implementation details, but ERC20 is an interface standard, so it doesn't by and large dictate how things ought to be implemented.
In the case of totalSupply, all the standard says is:
totalSupply
Returns the total token supply.
function totalSupply() public view returns (uint256)
If it's not clear what that means, the EIP does link to the OpenZeppelin contract as an example implementation, which has:
/**
 * @dev Total number of tokens in existence
 */
function totalSupply() public view returns (uint256) {
    return _totalSupply;
}
So as long as the total number of minted tokens is returned, you are fine. It doesn't matter if you internally have it computed as the sum of two other private variables. I do have some lingering doubts about your implementation from what you wrote, but as I said, that's out-of-scope :)
I hesitate to add this and possibly muddy the waters, but "tokens in existence" is somewhat ambiguous. Sometimes people have their burn function do an actual transfer to the zero address (not just the event), effectively removing the tokens from the circulating supply, and adjust the total supply accordingly. Their totalSupply will then return the number of tokens held only by non-zero addresses. Block explorers may or may not account for this. I would avoid doing this unless you absolutely know what you're doing.
I have an interesting use case that I can't seem to solve.
Problem: Tokens get X points per day. I want to freeze ERC721 tokens (they have IDs) for a certain period of time. During that time, they get 0 points per day.
I have the following to calculate points:
uint32 public constant SECONDS_IN_DAY = 1 days;

struct UserInfo {
    uint256 itemCount;
    uint256 pendingPoints;
    uint256 lastUpdate;
}

mapping(address => UserInfo) public userInfo;

function pending(address account) public view returns (uint256) {
    uint256 pendingPoints = userInfo[account].pendingPoints
        + (((block.timestamp - userInfo[account].lastUpdate) / SECONDS_IN_DAY) * userInfo[account].itemCount);
    return pendingPoints;
}

modifier updatePoints(address account) {
    userInfo[account].pendingPoints = pending(account);
    userInfo[account].lastUpdate = block.timestamp;
    _;
}
The problems I can't figure out:
How do I store when each token is frozen and for how long, so that I can accurately determine when to reduce points in the pending() function?
How do I do this in a gas-efficient way?
I've thought about adding a mapping to the UserInfo struct that holds a timestamp and the per-day amount that gets deducted, but then I would have no way to retrieve this information.
mapping(uint256 => uint256) perDayPointDeductions;
What can I try next?
Maybe something like snapshots and/or a Chainlink Keeper could be a reliable solution to this problem. You could also check how staking mechanisms work, since the problem you are facing is similar to staking.
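For illustration, a minimal on-chain sketch of that staking-like approach, building on the question's userInfo/updatePoints code. The names freeze, unfreeze and freezeUntil are made up, and it assumes the points contract can call ownerOf() (e.g. it is the ERC-721 itself):

// freeze-until timestamp per token ID (0 = not frozen)
mapping(uint256 => uint256) public freezeUntil;

function freeze(uint256 tokenId, uint256 duration) external updatePoints(msg.sender) {
    require(ownerOf(tokenId) == msg.sender, "not the owner");
    require(freezeUntil[tokenId] < block.timestamp, "already frozen");
    freezeUntil[tokenId] = block.timestamp + duration;
    // points accrued so far were settled by the modifier;
    // the frozen token stops earning from now on
    userInfo[msg.sender].itemCount -= 1;
}

function unfreeze(uint256 tokenId) external updatePoints(msg.sender) {
    require(ownerOf(tokenId) == msg.sender, "not the owner");
    require(freezeUntil[tokenId] != 0 && freezeUntil[tokenId] <= block.timestamp, "still frozen");
    freezeUntil[tokenId] = 0;
    // the token starts earning points again from now on
    userInfo[msg.sender].itemCount += 1;
}

This keeps the pending() calculation untouched: a frozen token simply stops counting toward itemCount until it is unfrozen.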
I'm not sure if I understand the issue well, but in this case I would store the data off-chain, e.g. with a tool like https://thegraph.com/en/.
I would emit events in the relevant functions and let The Graph index that data. From there I can read it and determine what happens to the tokens and when they will be frozen. (Gas efficient.)
But if you need to do this directly in the contract (hence avoiding the off-chain part), I would go for https://docs.chain.link/docs/chainlink-keepers/introduction/
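For illustration, the event-based (off-chain) variant could look roughly like this; the event name and fields are assumptions, and a subgraph (or any indexer) would then pick the events up:

// inside your ERC-721 / points contract
event TokenFrozen(uint256 indexed tokenId, address indexed owner, uint256 frozenUntil);

function freeze(uint256 tokenId, uint256 duration) external {
    require(ownerOf(tokenId) == msg.sender, "not the owner");
    // no storage write needed for the schedule itself;
    // off-chain consumers (The Graph, a keeper, a backend) index this event
    emit TokenFrozen(tokenId, msg.sender, block.timestamp + duration);
}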
I wanted to create a smart contract that only interacts with a specific NFT. I know there is a tokenId attribute, but I don't think it is unique: Cronoscan shows multiple collections that have the same tokenIds. Does anyone know if smart contracts can filter based on a contract address? I'd like to accomplish this with as little gas as possible.
Sorry if this is a basic question, but I've Googled and searched this message board and Stack Overflow, and could not find an answer other than someone trying to sell their service.
Yes, each contract has its own set of IDs, so token IDs are not unique between contracts, only unique within each contract.
The function below checks if the code size of the address is greater than 0. You will have to implement it in a new contract, or find an existing contract with this functionality to view/execute:
function isContract(address addressValue) public view returns (bool) {
    uint size;
    assembly { size := extcodesize(addressValue) }
    return size > 0;
}
Also notice this is a view function, so it won't cost any gas to execute when called off-chain.
In regards to someone selling it as a service, you can get it yourself by just deploying this contract on whatever mainnet you want (by the sounds of it, Cronos).
// SPDX-License-Identifier: MIT
pragma solidity 0.8.7;

contract ContractIdentifier {
    function isContract(address addressValue) public view returns (bool) {
        uint size;
        assembly { size := extcodesize(addressValue) }
        return size > 0;
    }
}
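As for only interacting with one specific NFT collection: a common pattern is to store that collection's contract address and talk only to it, since token IDs alone are not unique across contracts. A rough sketch (the contract and function names are made up, and it assumes OpenZeppelin's IERC721 interface is available):

// SPDX-License-Identifier: MIT
pragma solidity 0.8.7;

import "@openzeppelin/contracts/token/ERC721/IERC721.sol";

contract SingleCollectionConsumer {
    // the one ERC-721 collection this contract is allowed to interact with
    IERC721 public immutable collection;

    constructor(address collectionAddress) {
        collection = IERC721(collectionAddress);
    }

    function doSomethingWithToken(uint256 tokenId) external view returns (address) {
        // only this specific contract is ever queried, so IDs from other
        // collections can never be confused with this one
        return collection.ownerOf(tokenId);
    }
}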