Access nested mapping in Solidity

NOTE: I asked this question a few days ago while I was still on Solidity 0.7.0. Now I am using solc 0.8.0. With the new ABI coder v2, this should be possible; however, I am still stuck.
NOTE2: I know I can write a getter to return a specific review. However, I am mindful of gas costs and I need to get all the ratings in one go to compute averages, so I don't think that is feasible.
Suppose I have this data structure layout:
struct ReviewStruct {
    string rating;
    ...
}
struct Restaurant {
    ...
    uint reviewCount;
    mapping(uint => ReviewStruct) reviews;
}
uint public restaurantCount = 0;
mapping(uint => Restaurant) public restaurants;
Then, when I'm trying to access stuff in my JS app, it works, but not if I'm trying to access an actual review:
const restaurantCount = await review.methods.restaurantCount().call() // works
const restaurant = await review.methods.restaurants(2).call() // works
const reviewObj = await review.methods.restaurants(2).reviews(0).call() // throws an error
How do I access a mapping that is inside of a mapping (both are related to structs)?

Your reviews mapping is defined inside your Restaurant struct; that's why you can't access it directly. You need to access a restaurant first and then read its reviews.
For example:
const restaurantCount = await review.methods.restaurantCount().call()
const restaurant = await review.methods.restaurants(2).call()
// loop over the reviews of this restaurant
for (let i = 0; i < restaurant.reviewCount; i++) {
    let reviewObj = restaurant.reviews[i];
    console.log(reviewObj);
}
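If you then want the average rating on the client (the reason the question gives for fetching everything in one go), here is a minimal sketch building on the loop above, assuming each review's rating string holds a number:
// Minimal sketch, assuming each review's rating is a numeric string (e.g. "4")
const restaurant = await review.methods.restaurants(2).call();
let sum = 0;
for (let i = 0; i < restaurant.reviewCount; i++) {
    sum += parseFloat(restaurant.reviews[i].rating); // same access pattern as above
}
const average = restaurant.reviewCount > 0 ? sum / restaurant.reviewCount : 0;
console.log('average rating:', average);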

Related

Complex function using Parse Server Cloud Code (looping and creating records)

After a night of trial and error I have decided on a much simpler way to explain my issue. Again, I have no JS experience, so I don't really know what I am doing.
I have 5 classes:
game - holds information about my games
classification - holds information about the user classes available in games
game_classifications - creates a one game to many classifications relationship (makes a game have multiple classes)
mission - holds my mission information
mission_class - creates a one to many relationship between a mission and the classes available for that mission
Using Cloud Code, I want to provide two inputs through my REST API: missionObjectId and gameObjectId.
The actual steps I need the code to perform are:
Get the two inputs provided {"missionObjectId":"VALUE","gameObjectId":"VALUE"}
Search the game_classifications class for all records where game = gameObjectID
For each returned record, create a new record in mission_class with the following information:
mission_id = missionObjectId
classification = result.classification
Here is an image of the tables:
And here is how I have tried to achieve this:
Parse.Cloud.define("activateMission", async (request) => {
Parse.Cloud.useMasterKey();
const query = new Parse.query('game_classifications');
query.equalTo("gameObjectId", request.params.gameObjectId);
for (let i = 0; i < query.length; i ++) {
const mission_classification = Parse.Object.extend("mission_class");
const missionClass = new mission_classification();
missionClass.set("mission_id", request.params.missionObjectId);
missionClass.set("classification_id", query[i].classificationObjectId);
return missionClass.save();
}
});
Does anyone have any advice or input that might help me achieve this goal?
The current error I am getting is:
Parse.query is not a constructor
Thank you all in advance!
Some problems with your current code:
Parse.Cloud.useMasterKey() has not existed for quite a long time. Use the useMasterKey option instead.
It's Parse.Query and not Parse.query.
You need to run the query.findAll() command and iterate over its results (not over query).
For performance, move Parse.Object.extend calls to the beginning of the file.
To access the field of an object, use obj.get('fieldName') and not obj.fieldName.
If you return the save operation, it will save the first object, return, and not save the others.
So, the code needs to be something like this:
const mission_classification = Parse.Object.extend("mission_class");
const game = Parse.Object.extend("game");
Parse.Cloud.define("activateMission", async (request) => {
    const query = new Parse.Query('game_classifications');
    const gameObj = new game();
    gameObj.id = request.params.gameObjectId;
    query.equalTo("gameObjectId", gameObj);
    const queryResults = await query.findAll({ useMasterKey: true });
    for (let i = 0; i < queryResults.length; i++) {
        const missionClass = new mission_classification();
        missionClass.set("mission_id", request.params.missionObjectId);
        missionClass.set("classification_id", queryResults[i].get('classificationObjectId'));
        await missionClass.save(null, { useMasterKey: true });
    }
});
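To try the function out from a client, you can invoke it with Parse.Cloud.run; a quick usage sketch, assuming the Parse JS SDK is already initialized and with the two IDs as placeholders:
// Usage sketch: the SDK is assumed to be initialized; the IDs are placeholders
const result = await Parse.Cloud.run('activateMission', {
    missionObjectId: 'MISSION_OBJECT_ID',
    gameObjectId: 'GAME_OBJECT_ID'
});
console.log('activateMission finished:', result);
You can also hit it through the REST API by POSTing to /parse/functions/activateMission (the exact path depends on your server's mount point).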

Calling specific methods on a Solana Solidity program

I've built a simple smart contract to run on the Ethereum blockchain and I'm trying to replicate some of its behavior on Solana. After making some slight changes I've managed to compile the program with Solang targeting Solana, but I'm not sure how to go about calling the methods; there doesn't seem to be a great wealth of documentation or examples on this. For example, if my program were written as follows:
contract Example {
    function foo(...) { ... }
    function bar(...) { ... }
}
How would I specify a call to foo vs a call to bar? Furthermore, how would I encode the arguments for these method calls?
Currently my approach is to use the @solana/buffer-layout library to encode my arguments as a struct, starting with lo.u8('instruction') to specify the method call (in the example case, I assume 0 would refer to foo and 1 would refer to bar). I've taken this approach based on looking at the source code for @solana/spl-token (specifically this file and its dependencies), but I'm not sure if it will work for a program compiled using Solang, and the buffer-layout encoding has been throwing an unexpected error as well:
TypeError: Blob.encode requires (length 32) Uint8Array as src
The code throwing this error is as follows:
const method = lo.struct([
    lo.u8('instruction'),
    lo.seq(
        lo.blob(32),
        lo.greedy(lo.blob(32).span),
        'publicKeys',
    ),
])
const data = Buffer.alloc(64); // Using method.span here results in an error, as method.span == -1
method.encode(
    {
        instruction: 0,
        publicKeys: [firstPublicKey, secondPublicKey],
    },
    data,
);
While this type error seems obvious, it doesn't line up with the sample code in the solana-labs/solana-program-library repository. I'm pretty sure this problem has to do with my use of lo.seq() but I'm not sure what the problem is.
Is my approach to this correct besides this type error, or is my approach fundamentally wrong? How can I call the intended method with encoded arguments? Thank you for any help.
There's a better library for you to use, @solana/solidity, which has a Contract class to encapsulate calls on the contract.
For example, in your case, you could do:
const { Connection, LAMPORTS_PER_SOL, Keypair } = require('@solana/web3.js');
const { Contract, Program } = require('@solana/solidity');
const { readFileSync } = require('fs');
const EXAMPLE_ABI = JSON.parse(readFileSync('./example.abi', 'utf8'));
const PROGRAM_SO = readFileSync('./example.so');
(async function () {
    console.log('Connecting to your local Solana node ...');
    const connection = new Connection('http://localhost:8899', 'confirmed');
    const payer = Keypair.generate();
    console.log('Airdropping SOL to a new wallet ...');
    const signature = await connection.requestAirdrop(payer.publicKey, LAMPORTS_PER_SOL);
    await connection.confirmTransaction(signature, 'confirmed');
    const program = Keypair.generate();
    const storage = Keypair.generate();
    const contract = new Contract(connection, program.publicKey, storage.publicKey, EXAMPLE_ABI, payer);
    await contract.load(program, PROGRAM_SO);
    console.log('Program deployment finished, deploying the example contract ...');
    await contract.deploy('example', [true], program, storage);
    const res = await contract.functions.foo();
    console.log('foo: ' + res.result);
    const res2 = await contract.functions.bar();
    console.log('bar: ' + res2.result);
})();
Example adapted from https://github.com/hyperledger-labs/solang#build-for-solana
More information about the package at https://www.npmjs.com/package/@solana/solidity
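If foo and bar take arguments, you pass them straight to the call and the library ABI-encodes them for you, so you don't have to hand-roll a buffer layout. A short sketch (foo's signature and the argument values here are made up):
// Sketch only: foo's signature and the argument values are hypothetical
const res3 = await contract.functions.foo(42, 'some text');
console.log('foo(42, "some text"): ' + res3.result);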

How can I use the same value that was written to the JSON file during the same test execution in TestCafe

I have been trying to use a value from a JSON file that I wrote successfully using the fs.writeFile() function.
There are two test cases in the same fixture: one to create an ID and a 2nd to use that ID. I can write the ID to the JSON file successfully using fs.writeFile(), and I try to use that ID by importing the JSON file like var myid = require('../../resources/id.json').
The JSON file stores the correct ID of the current execution, but in the 2nd test I get the ID from the first execution.
For example, id:1234 is stored during the first test execution and id:4567 is stored in the 2nd test execution. During the 2nd test execution I need id:4567 but I get 1234; this is weird, isn't it?
I use it like
t.typeText(ele, myid.orid)
my JSON file contains only the id, like {"orid":"4567"}
I am new to JavaScript and TestCafe; any help would really be appreciated
Write File class
const fs = require('fs')
const baseClass =require('../component/base')
class WriteIntoFile{
constructor(orderID){
const OID = {
orderid: orderID
}
const jsonString = JSON.stringify(OID)
fs.writeFile(`resources\id.json`, jsonString, err => {
if (err) {
console.log('Error writing file', err)
} else {
console.log('Successfully wrote file')
}
})
}
}
export default WriteIntoFile
I created 2 different classes in order to separate the create and update operations, and I call the create and update order functions in a single fixture in the test file
Create Order class
class CreateOrder{
----
----
----
async createNewOrder(){
//get text of created ordder and saved order id in to the json file
-----
-----
-----
const orId= await baseclass.getOrderId();
new WriteIntoFile(orId)
console.log(orId)
-----
-----
-----
}
}export default CreateOrder
Update Order class
var id=require('../../resources/id.json')
class UpdateOrder{
async searchOrderToUpdate(){
await t
***//Here, I get old order id that was saved during previous execution***
.typeText(baseClass.searchBox, id.orderid)
.wait(2500)
.click(baseClass.searchIcon)
.doubleClick(baseClass.orderAGgrid)
console.log(id.ordderid)
----
----
async updateOrder(){
this.searchOrderToUpdate()
.typeText(baseClass.phNo, '1234567890')
.click(baseClass.saveBtn)
}
}export default UpdateOrder
Test file
const newOrder = new CreateOrder();
const update = new UpdateOrder();
const role = Role(`siteurl`, async t => {
await t
login('id')
await t
.wait(1500)
},{preserveUrl:true})
test('Should be able to create an Order', async t=>{
await newOrder.createNewOrder();
});
test('Should be able to update an order', async t=>{
await update.updateOrder();
});
I'll reply to this, but you probably won't be happy with my answer, because I wouldn't go down this same path as you proposed in your code.
I can see a couple of problems. Some of them might not be problems right now, but in a month, you could struggle with this.
1/ You are creating separate test cases that are dependent on each other.
This is a problem because of these reasons:
what if Should be able to create an Order doesn't run, or what if it fails? Then Should be able to update an order fails as well, and this information is useless, because it wasn't the update operation that failed; you simply didn't meet all the preconditions for the test case
how do you make sure Should be able to create an Order always runs before Should be able to update an order? There's no way! You can do it like this when one comes right before the other, and I think it will work, but at some point you'll move one test somewhere else and you'll be in trouble, spending hours debugging it. You have prepared a trap for yourself. I wrote this answer on this very topic; you can read it.
you can't run the tests in parallel
when I read your test file, there's no visible hint that the tests are dependent on each other. Therefore as a stranger to your code, I could easily mess things up because I have no way of knowing about it without going deeper in the code. This is a big trap for anyone who might come to your code after you. Don't do this to your colleagues.
2/ Working with files when all you need to do is pass a value around is too cumbersome.
I really don't see a reason why you need to save the id into a file. A slightly better approach (still violating 1/) could be:
const newOrder = new CreateOrder();
const update = new UpdateOrder();
// use a variable to pass the orderId around
// it's also visible that the tests are dependent on each other
let orderId = undefined;
const role = Role(`siteurl`, async t => {
    // some steps, I omit this for better readability
}, { preserveUrl: true })
test('Should be able to create an Order', async t => {
    orderId = await newOrder.createNewOrder();
});
test('Should be able to update an order', async t => {
    await update.updateOrder(orderId);
});
Doing it like this also slightly remedies what I wrote in 1/, that is that it's not visible at first sight that the tests are dependent on each other. Now, this is a bit improved.
Some other approaches to passing data around are mentioned here and here, and there is a small sketch of the required class changes below.
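For this to work, createNewOrder has to return the id and updateOrder has to accept it as a parameter. A minimal sketch of those two methods, reusing your baseClass helpers; the omitted page steps stay as they are in your own classes:
// Sketch only: selectors and omitted steps come from your existing classes
class CreateOrder {
    async createNewOrder() {
        // ...steps that create the order, as before...
        const orderId = await baseClass.getOrderId();
        return orderId; // return the id instead of writing it to a file
    }
}
class UpdateOrder {
    async updateOrder(orderId) {
        // ...search for the order using the id that was passed in...
        await t
            .typeText(baseClass.searchBox, orderId)
            .wait(2500)
            .click(baseClass.searchIcon);
    }
}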
Perhaps an even better approach is to use the t.fixtureCtx object:
const newOrder = new CreateOrder();
const update = new UpdateOrder();
const role = Role(`siteurl`, async t => {
    // some steps, I omit this for better readability
}, { preserveUrl: true })
test('Should be able to create an Order', async t => {
    t.fixtureCtx.orderId = await newOrder.createNewOrder();
});
test('Should be able to update an order', async t => {
    await update.updateOrder(t.fixtureCtx.orderId);
});
Again, I can at least see the tests are dependent on each other. That's already a big victory.
Now back to your question:
During 2nd test execution I need the id:4567 but I get 1234 this is weird, isn't it?
No, it's not weird. You required the file:
var id = require('../../resources/id.json')
and so it is loaded only once; if you write into the file later, you won't read the new content unless you read the file again. require() is Node's function for loading modules, and it makes sense to load (and cache) them only once.
This demonstrates the problem:
const idFile = require('./id.json');
const fs = require('fs');
console.log(idFile); // { id: 5 }
const newId = {
'id': 7
};
fs.writeFileSync('id.json', JSON.stringify(newId));
// it's been loaded once, you won't get any other value here
console.log(idFile); // { id: 5 }
What you can do to solve the problem?
You can use fs.readFileSync():
const idFile = require('./id.json');
const fs = require('fs');
console.log(idFile); // { id: 5 }
const newId = {
'id': 7
};
fs.writeFileSync('id.json', JSON.stringify(newId));
// you need to read the file again and parse its content
const newContent = JSON.parse(fs.readFileSync('id.json'));
console.log(newContent); // { id: 7 }
And this is what I warned you against in the comment section: it is too cumbersome and inefficient, because you write to a file and then read from the file just to get one value.
What you created is not very readable either:
const fs = require('fs')
const baseClass =require('../component/base')
class WriteIntoFile{
constructor(orderID){
const OID = {
orderid: orderID
}
const jsonString = JSON.stringify(OID)
fs.writeFile(`resources\id.json`, jsonString, err => {
if (err) {
console.log('Error writing file', err)
} else {
console.log('Successfully wrote file')
}
})
}
}
export default WriteIntoFile
All these operations for writing into a file are in a constructor, but a constructor is not the best place for all this. Ideally you would have only variable assignments in it. I also don't see much reason to create a new class when you are doing only two operations that easily fit on one line of code:
fs.writeFileSync('orderId.json', JSON.stringify({ orderid: orderId }));
Keep it as simple as possible. It's more readable like this than having to go to a separate file with the class and decipher what it does there.
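And when you do need the value back, reading it is a one-liner too:
const orderId = JSON.parse(fs.readFileSync('orderId.json')).orderid;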

Create map of existing objects in mint function for a smart contract

I'm really new to Solidity and there is still a lot I don't fully get. I have created this smart contract. While running a test, I am getting an error saying that the id cannot be set from the push, due to the following:
Error: Different number of components on the left hand side (1) than on the right hand side (0).
uint _id = arts.push(_art);//create ids
^------------------------^
I understand that push receives only one argument and should be able to assign the index to the id variable. Nonetheless this bug occurs, and I'm not sure if it's the compiler version or something else. I'm currently using Truffle for the tests with Solidity version "^0.6.0". I would really appreciate your help. Thanks in advance!
Here's my code:
pragma solidity >=0.4.21 <0.7.0;
import "#openzeppelin/contracts/token/ERC721/ERC721.sol";
contract Art is ERC721{
string[] public arts;
mapping(string => bool) _artExists;//similar to json or hash
constructor() ERC721("Art", "DATA") public {
}
//E.G color = "#FFFFFF"
//create art restrict in the future to mentors
function mint(string memory _art) public{
//Require unique Art
require(!_artExists[_art]);
uint _id = arts.push(_art);//create ids
//address
_mint(msg.sender, _id);
_artExists[_art] = true;
//Art - track it & add it
//Call the mint function
//Art - track it
}
}
In Solidity 0.6, push(x) no longer returns anything (in 0.5 it returned the new length), while a parameterless push() returns a reference to the newly added element, not an index.
Example:
arts.push() = "whatever you want";
Use the length attribute to get the index of the new element (arts.length - 1 after the push).

API Request Pagination

I am making a simple API request to GitHub to get all the repositories. The problem is that GitHub has a limit: the maximum it can send is 100 repos per request. There are users that have more than 100 repositories and I don't know how to access the rest or how to do pagination.
I am making GET request with Axios like this:
https://api.github.com/users/<AccountName>/repos?per_page=100
I can also put page number like so
https://api.github.com/users/<AccountName>/repos?page=3&per_page=100
But how do I make this work in the app without making 10 API requests? I wouldn't even know how many requests to make, because I don't know how many repos will come back; does somebody have 100 or 1000 repos? I would like everything to be returned and saved in an array, for example.
EDIT:
Example: I am passing in accountName
var config = {
headers: {'Authorization': `token ${ACCESS_TOKEN}`}
}
const REQUEST: string = 'https://api.github.com/users/'
const apiCall = {
getData: async function (accountName) {
const encodedAccountName = encodeURIComponent(accountName)
const requestUrl = `${REQUEST}${encodedAccountName}`
const user = await axios.get(requestUrl, config)
// This return user and inside of user there is a link for fetching repos
const repo = await axios.get(`${user.data.repos_url}?per_page=100`, config)
...
You can get the repo count by requesting from the user account URL first. For example here is mine:
https://api.github.com/users/erikh2000
The response there includes a "public_repos" value. Bam! That's the magic number you want.
You next need to make multiple fetches if the repo count is over 100. I know you didn't want to, but hey... can't blame web services for trying to conserve their bandwidth. The good news is you can probably put them in a Promise.all() block and have them all fetch together and return at once. So code like...
const fetchAllTheRepos = (userName, repoCount) => {
    const MAX_PER_PAGE = 100;
    const baseUrl = 'https://api.github.com/users/' + userName + '/repos?per_page=' + MAX_PER_PAGE;
    //Start fetching every page of repos.
    const fetchPromises = [], pageCount = Math.ceil(repoCount / MAX_PER_PAGE);
    for (let pageI = 1; pageI <= pageCount; ++pageI) {
        const fetchPagePromise = fetch(baseUrl + '&page=' + pageI);
        fetchPromises.push(fetchPagePromise);
    }
    //This promise resolves after all the fetching is done.
    return Promise.all(fetchPromises)
        .then((responses) => {
            //Parse all the responses to JSON.
            return Promise.all(responses.map((response) => response.json()));
        }).then((results) => {
            //Copy the results into one big array that has all the friggin repos.
            let repos = [];
            results.forEach((result) => {
                repos = repos.concat(result);
            });
            return repos;
        });
};
//I left out the code to get the repo count, but that's pretty easy.
fetchAllTheRepos('erikh2000', 7).then((repos) => {
    console.log(repos.length);
});
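The repo-count lookup that the comment above leaves out could look something like this, still using fetch and the "public_repos" field mentioned earlier (the user name is just an example):
//Sketch of the omitted repo-count lookup.
const fetchRepoCount = (userName) => {
    return fetch('https://api.github.com/users/' + userName)
        .then((response) => response.json())
        .then((user) => user.public_repos);
};
fetchRepoCount('erikh2000')
    .then((repoCount) => fetchAllTheRepos('erikh2000', repoCount))
    .then((repos) => console.log(repos.length));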
Simultaneously fetching all the pages may end up being more than Github wants to let you do at once for those accounts with lots of repos. I would put some "good citizen" limit on the number of repos you'll try to get at once, e.g. 1000. And then see if api.github.com agrees with your definition of a good citizen by watching for HTTP error responses. You can get into throttling solutions if needed, but probably a "grab it all at once" approach like above works fine.
On the other hand, if you are spidering through multiple accounts in one session, then maybe design the throttling in from the beginning just to you know... be nice. For that, look at a queue/worker pattern.
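If you do decide to throttle, one simple approach (short of a full queue/worker setup) is to fetch the page URLs in small chunks instead of firing them all at once. A rough sketch, with an arbitrary chunk size:
//Rough throttling sketch: fetch URLs in chunks of chunkSize instead of all at once.
const fetchInChunks = async (urls, chunkSize = 3) => {
    let results = [];
    for (let i = 0; i < urls.length; i += chunkSize) {
        const chunk = urls.slice(i, i + chunkSize);
        const responses = await Promise.all(chunk.map((url) => fetch(url)));
        const pages = await Promise.all(responses.map((response) => response.json()));
        results = results.concat(...pages);
    }
    return results;
};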