Is there any way to catch an event fired within another contract which is called by a low-level 'call' from the main contract in Solidity?

I have a Multisig contract that, when it reaches its minimum quorum, can execute a low-level call Transaction, which may target another contract.
function _execute(Transaction storage transaction) internal {
    //some code
    // solhint-disable-next-line
    (bool success, ) = transaction.target.call{value: transaction.value}(callData); // FIRES AN EVENT IN OTHER CONTRACT
    if (success) {
        emit TransactionExecuted( // FIRES SECOND
            //some code
        );
    } else {
        emit TransactionFailed(
            //some code
        );
        //some code
    }
}
My execute function fires an event after the Transaction (the call) is executed, whether it was successful or not. Meanwhile, if the called function fires an event of its own, I can catch the event fired by that contract, but the event parameters are not there. The second contract, which is called by _execute(), is written as follows:
function _addMember(
    address memberAddress,
    bytes32 memberName,
    Membership _membership
)
    internal
{
    //some code
    // Fire an event
    emit MembershipChanged(memberAddress, true, _membership); // FIRES FIRST
}
The following is the test, written in TypeScript. I can get the event fired on the called contract, but there is no data in it:
it("should contain two events from previous transaction, adding a new core member and running a Transaction by multisig", async () => {
    // r is the receipt of the caller (multisig) contract
    expect(r.events!.length).to.be.eq(2); // MembershipChanged, TransactionExecuted
    // NOTE: r.events![0].address === memberReg.address // memberReg is the callee contract
    /* THE FOLLOWING DOESN'T CONTAIN EVENT DATA NOR TOPICS OF THE memberReg CONTRACT */
    expect(r.events![0].event).to.be.eq("MembershipChanged"); // fails
    expect(r.events![0].args!.member).to.be.eq(coreCandidateAddr); // fails
    expect(r.events![0].args!.isMember).to.be.true; // fails
    expect(r.events![0].args!.membership).to.be.eq(Membership.Core); // fails
    /* THE FOLLOWING WORKS WELL */
    expect(r.events![1].event).to.be.eq("TransactionExecuted"); // passes
    //some code
})
I guess it would be possible to catch those events in production easily by listening to the deployed contract, but I don't know how to do this in a test environment.

OK, thanks to #bbbbbbbbb, here is the simple solution: the listener should be set up before the transaction call.
memberReg.once("MembershipChanged", (member, isMember, membership, event) => {
    expect(event.event).to.be.eq("MembershipChanged"); // passes
    expect(member).to.be.eq(coreCandidateAddr); // passes
    expect(isMember).to.be.true; // passes
    expect(membership).to.be.eq(Membership.Core); // passes
});
r = await awaitTx(multisig.execute(id)); // execution happens here
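The key detail is ordering: the listener has to be registered before the transaction is sent, or the event is missed. A minimal sketch using Node's plain EventEmitter standing in for the contract object (FakeContract and the argument values are made up for illustration):

```typescript
import { EventEmitter } from "events";

// Hypothetical stand-in for the ethers contract object: calling execute()
// emits the event, the way the callee contract fires MembershipChanged.
class FakeContract extends EventEmitter {
  execute(): void {
    this.emit("MembershipChanged", "0xCoreCandidate", true, 2);
  }
}

const contract = new FakeContract();
let captured: unknown[] = [];

// Attach the one-shot listener BEFORE triggering the call...
contract.once("MembershipChanged", (...args: unknown[]) => {
  captured = args;
});

// ...then execute; the handler runs and the arguments are captured.
contract.execute();
```

Swapping the two steps (executing first, subscribing after) would leave captured empty, which is exactly the race avoided by calling memberReg.once(...) before multisig.execute(id).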

Your functions _execute and _addMember are marked as internal; they should be marked external or public to be callable in the testing environment, or you can create a public function that calls them.
If you are using Hardhat, the easy way is to use the event-testing pattern suggested by the Hardhat documentation; this also works for low-level calls.
const tx = await YourContract.execute(foo);
// Check that the event fired by a low-level call is correct.
// withArgs receives the expected values for the event.
await expect(tx).to.emit(YourContract, "MembershipChanged").withArgs(foo, bar /* ... */);
For more info, check the Hardhat docs: https://hardhat.org/tutorial/testing-contracts#full-coverage

Related

NestJs/ClientProxy: Manual listen to events without decorators

Is there a manual way to subscribe to an event message pattern without using decorators on a function?
In a service, I send a message to a different microservice.
The call returns nothing (just void) but asynchronously triggers an event, which I want to listen to in the service right afterwards.
For example, it would maybe work like this:
// An event will be triggered later, after first value
await lastValueFrom(
    this.rabbitmqClient.send(RmqPatterns.DO_STUFF, payload),
);
// Now listening for that async event
this.rabbitmqClient.listen(RmqPatterns.DID_STUFF, async msg => {
    console.log(`Received message: ${msg.content.toString()}`);
});

Testing ChainlinkClient callbacks - how to bypass recordChainlinkFulfillment?

I've got a Chainlink client contract which makes a DirectRequest to an oracle. The oracle does its thing and then returns the answer via the typical callback selector passed in via the ChainlinkRequest. It all works well, but I'd like to write some tests that exercise the callback implementation.
My client contract is as follows:
contract PriceFeed is Ownable, ChainlinkClient {
    function updatePrice() public onlyOwner returns (bytes32 requestId) {
        // makes Chainlink request specifying callback via this.requestCallback.selector
    }

    function requestCallback(bytes32 _requestId, uint256 _newPrice) public
        recordChainlinkFulfillment(_requestId)
    {
        price = _newPrice;
    }
}
The problem arises when the test code calls requestCallback(...) and the code hits the recordChainlinkFulfillment(...) modifier. The ChainlinkClient complains that the requestId being passed in by the test below isn't in the underlying private pendingRequests mapping maintained by the ChainlinkClient.
The simplified version of ChainlinkClient looks like this:
contract ChainlinkClient {
    mapping(bytes32 => address) private pendingRequests;

    modifier recordChainlinkFulfillment(bytes32 _requestId) {
        require(msg.sender == pendingRequests[_requestId], "Source must be the oracle of the request");
        delete pendingRequests[_requestId];
        emit ChainlinkFulfilled(_requestId);
        _;
    }
}
My Foundry/Solidity test is as follows:
contract PriceFeedTest is Test {
    function testInitialCallback() public {
        priceFeed.requestCallback("abc123", 1000000); // fails on this line
        assertEq(1000000, priceFeed.price(), "Expecting price to be 1000000");
    }
}
The code fails on the first line of testInitialCallback() with: Source must be the oracle of the request.
How can I trick the ChainlinkClient into allowing my callback to get past the modifier check? AFAIK I can't access and pre-populate the private pendingRequests mapping. Is there another way?
I know that Foundry provides cheatcodes to help with testing, and there's a stdstorage cheatcode, but I'm not familiar with how to construct a call to stdstorage to override pendingRequests, if that's even possible with a cheatcode.
contract PriceFeedTest is Test {
    function testInitialCallback2() public {
        stdstore
            .target(address(priceFeed))
            .sig("pendingRequests()")
            .with_key("abc123")
            .checked_write(address(this));
        priceFeed.requestCallback("abc123", 1000000);
        assertEq(1000000, priceFeed.price(), "Expecting price to be 1000000");
    }
}
The above code throws the following error: No storage use detected for target
Any help would be greatly appreciated. Many thanks.
When you execute the updatePrice function in your test, you should be able to extract the requestId from the transaction receipt's events. Once you have that, you can then use it in your call to requestCallback. Check out this example unit test from the hardhat starter kit.
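A rough TypeScript sketch of that extraction; the receipt shape below imitates an ethers.js receipt, and the event name ChainlinkRequested with an id argument is an assumption based on the ChainlinkClient library:

```typescript
// Only the fields this helper touches are modelled here (hypothetical shapes).
interface ReceiptEvent {
  event?: string;
  args?: Record<string, unknown>;
}
interface Receipt {
  events?: ReceiptEvent[];
}

// Find the ChainlinkRequested event in the updatePrice() receipt and
// return its request id, or undefined if the event is absent.
function extractRequestId(receipt: Receipt): unknown {
  const ev = receipt.events?.find((e) => e.event === "ChainlinkRequested");
  return ev?.args?.id;
}

// Fake receipt standing in for `await (await priceFeed.updatePrice()).wait()`.
const receipt: Receipt = {
  events: [{ event: "ChainlinkRequested", args: { id: "0xabc123" } }],
};
const requestId = extractRequestId(receipt);
```

The recovered requestId can then be passed to requestCallback in place of the hard-coded "abc123".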

address(this).send(msg.value) returning false but ethers got transferred

Below is my function:
// Function
function deposit() payable external {
    // if (!wallet_address.send(msg.value)) {
    //     revert("deposit fail");
    // }
    bool isErr = address(this).send(msg.value);
    console.log(isErr);
    emit Deposit(msg.sender, msg.value, address(this).balance);
}
I use the Remix IDE with Solidity version 0.8.7. My question is: why does send() return false even though the ether got transferred? Does send() return false on success by default?
address(this).send(msg.value) effectively just creates an unnecessary internal transaction redirecting the value accepted by "this contract" to "this contract".
This internal transaction fails because your contract implements neither the receive() nor the fallback() special function. One of these is needed to accept ETH sent to your contract via send(), transfer(), call() in some cases, and generally any transaction (internal or main) that does not invoke a specific existing function. The failure does not revert the main transaction; send() just returns false.
TL;DR: The send() call is redundant in this case and you can safely remove it. Your contract is able to accept ETH through the deposit() function even without it.
send() is a low-level call, and it can fail after the transfer step. If you don't check the success variable, the compiler warns you that the call could fail and that you might carry on unaware that you ignored the failure.
So you should check the success variable to ensure the transfer succeeded:
require(success, "ETH_TRANSFER_FAILED");

Empty Events Array on UI but works on Test | Need TokenId from Transaction

I'm working on an app that allows the user to create NFTs and list them on a marketplace. When I try to create the token on the UI using MetaMask, a createToken function is called. The resolved promise of createToken is an object for which I expect an events key with 2 events (I'm able to confirm this by running npx hardhat test). However, I don't actually see these events emitted on the UI... I need these events to get the tokenId. If someone knows an alternative way to get the tokenId, I'm open to that as well.
createToken:
contract NFT is ERC721URIStorage {
    using Counters for Counters.Counter;
    Counters.Counter private _tokenIds;
    address contractAddress;

    constructor(address marketplaceAddress) ERC721("Metaverse", "METT") {
        contractAddress = marketplaceAddress;
    }

    function createToken(string memory tokenURI) public returns (uint256) {
        _tokenIds.increment();
        uint256 newItemId = _tokenIds.current();
        _mint(msg.sender, newItemId);
        _setTokenURI(newItemId, tokenURI);
        // Not emitting this event to the UI but it works in the test
        setApprovalForAll(contractAddress, true);
        return newItemId;
    }
}
The function on the UI looks like below, where createSale creates a listing on the marketplace:
async function createSale(url) {
    const web3Modal = new Web3Modal();
    const connection = await web3Modal.connect();
    const provider = new ethers.providers.Web3Provider(connection);
    const signer = provider.getSigner();
    /* next, create the item */
    let contract = new ethers.Contract(nftAddress, NFT.abi, signer);
    let transaction = await contract.createToken(url);
    let tx = await transaction.wait();
    // Seeing a successful transaction
    console.log("tx ==> ", tx);
    let event = tx.events[0];
    // Breaks here since tx.events[0] is `undefined`
    let value = event.args[2];
    let tokenId = value.toNumber();
    const price = ethers.utils.parseUnits(formInput.price, "ether");
    /* then list the item for sale on the marketplace */
    contract = new ethers.Contract(nftMarketAddress, Market.abi, signer);
    let listingPrice = await contract.getListingPrice();
    listingPrice = listingPrice.toString();
    transaction = await contract.createMarketItem(nftAddress, tokenId, price, {
        value: listingPrice,
    });
    await transaction.wait();
    router.push("/");
}
Below is a screenshot of the resolved promise with the empty events array:
The nftAddress is empty: it doesn't hold the NFT contract.
The blockNumber property has the value 1, which means this transaction was mined in the first block. The Hardhat network automines by default, creating a new block for each transaction. That makes the attached transaction the first one on this network instance and rules out any previous transaction having deployed the contract (to this network instance).
When you send a valid transaction to an empty address, it goes through, as there's no contract to revert it, but there's also no contract to emit event logs. So it results in an empty events array, just like on the attached screenshot.
I expect an events key with 2 events (I'm able to confirm this by running npx hardhat test)
When you run a hardhat test, it creates a network instance, deploys the contracts, runs the test scripts, and then destroys this network instance.
Same goes for when you run npx hardhat node - it creates a new network instance, and when you stop running the node, it destroys its data (along with deployed contracts).
The Hardhat network doesn't seem to have a way to save its state and load it later. (Anyone please correct me if I'm mistaken; I just couldn't find anything related in the docs.) So you might have to redeploy the contract each time you run npx hardhat node, or use a different network that supports this feature (e.g. Ganache and its Workspaces). You'll still be able to use the Hardhat library for the tests; it will just connect to this other network instead of the default Hardhat network.
I had this exact same issue. I solved mine by using the Ropsten test network instead.

Have multiple calls wait on the same internal async task

(Note: this is an over-simplified scenario to demonstrate my coding issue.)
I have the following class interface:
public class CustomerService
{
    Task<IEnumerable<Customer>> FindCustomersInArea(String areaName);
    Task<Customer> GetCustomerByName(String name);
    :
}
This is the client-side of a RESTful API which loads a list of Customer objects from the server then exposes methods that allows client code to consume and work against that list.
Both of these methods work against the internal list of Customers retrieved from the server as follows:
private Task<IEnumerable<Customer>> LoadCustomersAsync()
{
    var tcs = new TaskCompletionSource<IEnumerable<Customer>>();
    try
    {
        // GetAsync returns Task<HttpResponseMessage>
        Client.GetAsync(uri).ContinueWith(task =>
        {
            if (task.IsCanceled)
            {
                tcs.SetCanceled();
            }
            else if (task.IsFaulted)
            {
                tcs.SetException(task.Exception);
            }
            else
            {
                // Convert HttpResponseMessage to desired return type
                var response = task.Result;
                var list = response.Content.ReadAs<IEnumerable<Customer>>();
                tcs.SetResult(list);
            }
        });
    }
    catch (Exception ex)
    {
        tcs.SetException(ex);
    }
    return tcs.Task;
}
The Client class is a custom version of the HttpClient class from the WCF Web API (now ASP.NET Web API) because I am working in Silverlight and they don't have an SL version of their client assemblies.
After all that background, here's my problem:
All of the methods in the CustomerService class use the list returned by the asynchronous LoadCustomersAsync method; therefore, any calls to these methods should wait (asynchronously) until the LoadCustomersAsync method has returned and the appropriate logic has executed on the returned list.
I also only want one call made from the client (in LoadCustomers) at a time. So, I need all of the calls to the public methods to wait on the same internal task.
To review, here's what I need to figure out how to accomplish:
Any call to FindCustomersInArea and GetCustomerByName should return a Task that waits for the LoadCustomersAsync method to complete. If LoadCustomersAsync has already returned (and the cached list still valid), then the method may continue immediately.
After LoadCustomersAsync returns, each method has additional logic required to convert the list into the desired return value for the method.
There must only ever be one active call to LoadCustomersAsync (of the GetAsync method within).
If the cached list expires, then subsequent calls will trigger a reload (via LoadCustomersAsync).
Let me know if you need further clarification, but I'm hoping this is a common enough use case that someone can help me work out the logic to get the client working as desired.
Disclaimer: I'm going to assume you're using a singleton instance of your HttpClient subclass. If that's not the case we need only modify slightly what I'm about to tell you.
Yes, this is totally doable. The mechanism we're going to rely on for subsequent calls to LoadCustomersAsync is that if you attach a continuation to a Task, even if that Task completed eons ago, your continuation will be signaled "immediately" with the task's final state.
Instead of creating/returning a new TaskCompletionSource<T> (TCS) every time from the LoadCustomersAsync method, you would instead have a field on the class that holds the TCS. This allows your instance to remember the TCS that last represented a cache miss. The TCS's state is signaled exactly as in your existing code. You'll track whether the data has expired in another field which, combined with whether the TCS is currently null, determines whether you actually go out and load the data again.
Ok, enough talk, it'll probably make a lot more sense if you see it.
The Code
public class CustomerService
{
    // Your cache timeout (using 15 mins as an example; can load from config or wherever)
    private static readonly TimeSpan CustomersCacheTimeout = new TimeSpan(0, 15, 0);

    // A lock object used to provide thread safety
    private object loadCustomersLock = new object();

    private TaskCompletionSource<IEnumerable<Customer>> loadCustomersTaskCompletionSource;
    private DateTime loadCustomersLastCacheTime = DateTime.MinValue;

    private Task<IEnumerable<Customer>> LoadCustomersAsync()
    {
        lock (this.loadCustomersLock)
        {
            bool needToLoadCustomers = this.loadCustomersTaskCompletionSource == null
                ||
                (this.loadCustomersTaskCompletionSource.Task.IsFaulted || this.loadCustomersTaskCompletionSource.Task.IsCanceled)
                ||
                DateTime.Now - this.loadCustomersLastCacheTime > CustomerService.CustomersCacheTimeout;

            if (needToLoadCustomers)
            {
                this.loadCustomersTaskCompletionSource = new TaskCompletionSource<IEnumerable<Customer>>();
                try
                {
                    // GetAsync returns Task<HttpResponseMessage>
                    Client.GetAsync(uri).ContinueWith(antecedent =>
                    {
                        if (antecedent.IsCanceled)
                        {
                            this.loadCustomersTaskCompletionSource.SetCanceled();
                        }
                        else if (antecedent.IsFaulted)
                        {
                            this.loadCustomersTaskCompletionSource.SetException(antecedent.Exception);
                        }
                        else
                        {
                            // Convert HttpResponseMessage to desired return type
                            var response = antecedent.Result;
                            var list = response.Content.ReadAs<IEnumerable<Customer>>();
                            this.loadCustomersTaskCompletionSource.SetResult(list);

                            // Record the last cache time
                            this.loadCustomersLastCacheTime = DateTime.Now;
                        }
                    });
                }
                catch (Exception ex)
                {
                    this.loadCustomersTaskCompletionSource.SetException(ex);
                }
            }

            return this.loadCustomersTaskCompletionSource.Task;
        }
    }
}
Scenarios where the customers aren't loaded:
If it's the first call, the TCS will be null so the TCS will be created and customers fetched.
If the previous call faulted or was canceled, a new TCS will be created and the customers fetched.
If the cache timeout has expired, a new TCS will be created and the customers fetched.
Scenarios where the customers are loading/loaded:
If the customers are in the process of loading, the existing TCS's Task will be returned and any continuations added to the task using ContinueWith will be executed once the TCS has been signaled.
If the customers are already loaded, the existing TCS's Task will be returned and any continuations added to the task using ContinueWith will be executed as soon as the scheduler sees fit.
NOTE: I used a coarse grained locking approach here and you could theoretically improve performance with a reader/writer implementation, but it would probably be a micro-optimization in your case.
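For comparison, the same single-flight idea with expiry can be sketched in TypeScript, where a stored Promise plays the role of the TCS field (the class shape and the stubbed fetch are hypothetical, not part of the original code):

```typescript
class CustomerService {
  loads = 0; // counts actual fetches, for demonstration only
  private cache: Promise<string[]> | null = null;
  private cachedAt = 0;
  private readonly ttlMs = 15 * 60 * 1000; // 15-minute cache timeout

  // Stub standing in for the real HTTP call (Client.GetAsync above).
  private fetchCustomers(): Promise<string[]> {
    this.loads++;
    return Promise.resolve(["Alice", "Bob"]);
  }

  // Every caller shares the same in-flight or completed promise.
  loadCustomers(): Promise<string[]> {
    const expired = Date.now() - this.cachedAt > this.ttlMs;
    if (this.cache === null || expired) {
      this.cachedAt = Date.now();
      this.cache = this.fetchCustomers().catch((err) => {
        this.cache = null; // a failed load is not cached
        throw err;
      });
    }
    return this.cache;
  }
}

const service = new CustomerService();
service.loadCustomers();
service.loadCustomers(); // second call reuses the cached promise
```

Just as with the TCS version, a faulted load clears the cached promise so the next caller retries, while concurrent callers during a load all await the same promise.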
I think you should change the way you call Client.GetAsync(uri). Do it roughly like this:
Lazy<Task<HttpResponseMessage>> getAsyncLazy = new Lazy<Task<HttpResponseMessage>>(() => Client.GetAsync(uri));
And in your LoadCustomersAsync method you write:
getAsyncLazy.Value.ContinueWith(task => ...
This will ensure that GetAsync only gets called once and that everyone interested in its result receives the same task.