I am storing a user's scores in Google Cloud Firestore, with each score as a new document in a collection named "points".
collection name: points
document1: {id: 111, userid: 345, value: 50}
document2: {id: 222, userid: 345, value: 70}
document3: {id: 333, userid: 345, value: 30}
document4: {id: 444, userid: 345, value: 100}
I want to sum all the values in the value field.
I have googled many times but found nothing. Is there any alternative to sum(), or any other way to implement recording a user's scores?
There are no built-in aggregation operators in Cloud Firestore.
The naïve solution is to load the data in the client and sum it there, but that means you (and your users) incur the cost of loading all documents every time they need to show the sum.
A better way is to keep a so-called running total in your database, which you update whenever a document is written (added, modified, or removed) to the "points" collection. For this you have two options: do it from the client, or do it from a trusted environment (such as Cloud Functions). The Firestore documentation on aggregation queries describes both options and shows sample code.
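For illustration, here is a minimal sketch of the Cloud Functions option; the totals/{userid} document layout and the function name are my own assumptions, not from the official docs:

import * as admin from 'firebase-admin'
import * as functions from 'firebase-functions'

admin.initializeApp()

// Keep a running total in totals/{userid}, updated on every write to "points".
// FieldValue.increment applies the delta atomically on the server.
export const updatePointsTotal = functions.firestore
  .document('points/{pointId}')
  .onWrite(async (change) => {
    const before = change.before.data() // undefined on create
    const after = change.after.data()   // undefined on delete
    // One delta expression covers creates, updates, and deletes
    const delta = (after?.value ?? 0) - (before?.value ?? 0)
    const userid = (after ?? before)?.userid
    await admin
      .firestore()
      .doc(`totals/${userid}`)
      .set({ sum: admin.firestore.FieldValue.increment(delta) }, { merge: true })
  })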
Use a Cloud Function that exposes a URL for you.
Example:
import { Request, Response } from 'express'
import * as admin from 'firebase-admin'
import * as functions from 'firebase-functions'

export const getUsersCount = functions
  .runWith({ memory: '2GB', timeoutSeconds: 60 })
  .https.onRequest(async (req: Request, res: Response) => {
    // Load the whole collection and count the documents in the snapshot
    const allUsers = await admin
      .firestore()
      .collection('users')
      .get()
    const numberOfUsers = allUsers.size
    res.status(200).json({
      allTimeUsers: numberOfUsers,
    })
  })
Then just run firebase deploy --only functions:getUsersCount.
The logs will print out the URL for you. The URL might take a while to load if it's a big app.
You can either use forEach or iterate in a for loop. This answer on Stack Overflow could help.
Here's an example from the same:
for (var i in querySnapshot.docs) {
  const doc = querySnapshot.docs[i]
  // do what you want to here
}
---OR---
You can use forEach like this:
const collRef = firebase.firestore().collection('todos');
const query = collRef.orderBy('position');
const items = query.get()
  .then((snapshot) => {
    let newCount = 0;
    snapshot.forEach((doc) => {
      const docRef = collRef.doc(doc.id);
      docRef.update({ position: newCount });
      newCount += 1;
    });
  });
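Applied to the original question, a minimal sketch that sums the value field client-side could look like this (the userid filter is an assumption based on the question's data):

const pointsRef = firebase.firestore().collection('points');

pointsRef
  .where('userid', '==', 345)
  .get()
  .then((snapshot) => {
    let total = 0;
    // Add up the value field of every matching document
    snapshot.forEach((doc) => {
      total += doc.data().value;
    });
    console.log('total points:', total);
  });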
Here is a reproducible StackBlitz -
https://stackblitz.com/edit/nuxt-starter-jlzzah?file=components/users.vue
What's wrong? -
My code fetches 15 items, and on the bottom-scroll event it should fetch another 15 different items, but it just fetches the same items again.
I've followed the video below for this implementation; it works in the video but not in my StackBlitz code:
https://www.youtube.com/watch?v=WRnoQdIU-uE&t=3s&ab_channel=JohnKomarnicki
The only difference from the video is that he's using axios while I use Nuxt 3's useFetch.
It's not really a cache issue. useFetch "freezes" the API URL; changes you make to the string directly will not be reliably reflected. If you want to add parameters to your API URL, use the query option of useFetch. This option is reactive, so you can use refs and the query will update with the refs. Alternatively, you can use the provided refresh() method.
const limit = ref(10)
const skip = ref(20)
const { data: users, refresh: refreshUsers } = await useFetch(
  'https://dummyjson.com/users',
  {
    query: {
      limit,
      skip
    }
  }
);
//use the data object directly to access the result
console.log(users.value)
//if you want to update users with different params later, simply change the ref and the query will update
limit.value = 23
//use refresh to manually refresh the query
refreshUsers()
This results in a first API call https://dummyjson.com/users?limit=10&skip=20 and then a second with the updated values https://dummyjson.com/users?limit=23&skip=20
You can leave the cache alone; disabling it is just a workaround and will not work reliably.
[Updated] The useFetch() documentation is now updated as described below.
The query option is not well documented yet, as discussed in this nuxt issue. I've created a pull request on nuxt/framework to have it reflected in the documentation. Please see a full explanation below:
Using the query option, you can add search parameters to your query. This option is extended from unjs/ohmyfetch and is using ufo to create the URL. Objects are automatically stringified.
const param1 = ref('value1')
const { data, pending, error, refresh } = await useFetch('https://api.nuxtjs.dev/mountains', {
  query: { param1, param2: 'value2' }
})
This results in https://api.nuxtjs.dev/mountains?param1=value1&param2=value2
Nuxt 3's useFetch uses caching by default. Use the initialCache: false option to disable it:
const getUsers = async (limit, skip) => {
  const { data: users } = await useFetch(
    `https://dummyjson.com/users?limit=${limit}&skip=${skip}`,
    {
      initialCache: false,
    }
  );
  // Return the fetched value
  return users.value.users;
};
But you should probably use plain $fetch instead of useFetch in this scenario to avoid caching:
const getUsers = async (limit, skip) => {
  const { users } = await $fetch(
    `https://dummyjson.com/users?limit=${limit}&skip=${skip}`
  );
  // Return the fetched value
  return users;
};
I am trying to use the OpenSea API, and I noticed that I need to set a limit before retrieving assets:
https://docs.opensea.io/reference/getting-assets
I figured I can use the offset to navigate through all the items, even though that's tedious. But the problem is that the offset itself has a limit, so are assets beyond the max offset inaccessible?
I read that the API is "rate-limited" without an API key, so I assume that relates to the number of requests you can make in a certain time period; am I correct about that? Or does a key lift the limit on returned assets? The documentation isn't clear about that: https://docs.opensea.io/reference/api-overview
What can I do to navigate through all the assets ?
I may be late answering this one, but I had a similar problem. You can only access a limited number (50) of assets per request when using the API.
Using the API referenced on the page you linked to, you could do a for loop to grab assets of a collection in a range. For example, using Python:
import requests

def get_asset(collection_address: str, asset_id: str) -> str:
    url = "https://api.opensea.io/api/v1/assets?token_ids=" + asset_id + "&asset_contract_address=" + collection_address + "&order_direction=desc&offset=0&limit=20"
    response = requests.request("GET", url)
    asset_details = response.text
    return asset_details
#using the Dogepound collection with address 0x73883743dd9894bd2d43e975465b50df8d3af3b2
collection_address = '0x73883743dd9894bd2d43e975465b50df8d3af3b2'
asset_ids = [i for i in range(10)]
assets = [get_asset(collection_address, str(i)) for i in asset_ids]
print(assets)
For me, I actually used TypeScript because that's what OpenSea uses for their SDK (https://github.com/ProjectOpenSea/opensea-js). It's a bit more versatile and allows you to automate making offers, purchases, and sales on assets. Anyway, here's how you can get all of those assets in TypeScript (you may need a few more dependencies than those referenced below):
import * as Web3 from 'web3'
import { OpenSeaPort, Network } from 'opensea-js'

// This example provider won't let you make transactions, only read-only calls:
const provider = new Web3.providers.HttpProvider('https://mainnet.infura.io')

const seaport = new OpenSeaPort(provider, {
  networkName: Network.Main
})

async function getAssets(seaport: OpenSeaPort, collectionAddress: string, tokenIDRange: number) {
  let assets: Array<any> = []
  for (let i = 0; i < tokenIDRange; i++) {
    try {
      // opensea-js's getAsset takes the contract address as tokenAddress
      const result = await seaport.api.getAsset({ tokenAddress: collectionAddress, tokenId: i })
      assets = [...assets, result]
    } catch (err) {
      console.log(err)
    }
  }
  return assets
}

(async () => {
  // Dogepound collection address, as in the Python example above
  const collectionAddress = '0x73883743dd9894bd2d43e975465b50df8d3af3b2'
  const assets = await getAssets(seaport, collectionAddress, 10)
  // Do something with assets
})()
The final thing to be aware of is that their API is rate-limited, like you said. So you can only make a certain number of calls to their API within a time frame before you get a pesky 429 error. So either find a way of bypassing rate limits or put a timer on your requests, as in the sketch below.
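For example, a minimal sketch of a timer inside the loop above (the 300 ms delay is an arbitrary assumption; tune it to your rate limit):

// Sleep helper to space out API calls
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms))

// Inside the getAssets loop, wait before each call to stay under the limit
for (let i = 0; i < tokenIDRange; i++) {
  await sleep(300) // arbitrary delay; adjust to your allowance
  const result = await seaport.api.getAsset({ tokenAddress: collectionAddress, tokenId: i })
  assets = [...assets, result]
}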
I am working with Nuxt and Vue, with a MySQL database, all of which are new to me. I am transitioning out of WebMatrix, where I had a single Admin page for multiple tables, with dropdowns for selecting a particular option. On this page, I could elect to add, edit, or delete the selected option, say a composer or music piece. Here is some code for just 2 of the tables (it gets a "Module build failed" runtime error):
<script>
export default {
  async asyncData(context) {
    let [{arrangers}, {composers}] = await Promise.all([
      context.$axios.get(`/api/arrangers`),
      context.$axios.get(`/api/composers`),
    ])
    const {arrangers} = await context.$axios.get('/api/arrangers')
    const {composers} = await context.$axios.get('/api/composers')
    return { arrangers, composers }
  },
}
</script>
You have the same variable name for both the input (the left part of the Promise.all) and the result of your axios call. To avoid the naming collision, you can rename the result and return this:
const { arrangers: fetchedArrangers } = await context.$axios.get('/api/arrangers')
const { composers: fetchedComposers } = await context.$axios.get('/api/composers')
return { fetchedArrangers, fetchedComposers }
EDIT: this is how I'd write it
async asyncData({ $axios }) {
  const [posts, comments] = await Promise.all([
    $axios.$get('https://jsonplaceholder.typicode.com/posts'),
    $axios.$get('https://jsonplaceholder.typicode.com/comments'),
  ])
  console.log('posts', posts)
  console.log('comments', comments)
  return { posts, comments }
},
When you destructure the result of a Promise.all, you need to destructure according to the result you'll actually get from the API. Usually the payload sits under a data key, so { arrangers } or { composers } will usually not work. Of course, it depends on your own API and whether it returns this shape of data.
Since destructuring two properties both named data is not doable, it's better to simply use array destructuring. This way, it will return the object with a data array inside of it.
To get direct access to the data, you can use the $get shortcut, which comes in handy in our case. Destructuring $axios directly is a nice-to-have too, as it removes the dispensable context object.
In my example, I've used JSONplaceholder to have a classic API behavior (especially the data part) but it can work like this with any API.
Here is the end result.
Also, this is what happens if you simply use this.$axios.get: you get the famous data wrapper that you need to reach into later on (.data) to get to the useful part of the API's response. That's why I love the $get shortcut; it goes to the point faster.
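For example, a small sketch of the difference:

// With get you unwrap the axios response yourself...
const response = await this.$axios.get('https://jsonplaceholder.typicode.com/posts')
const posts = response.data

// ...while $get resolves directly to the payload
const samePosts = await this.$axios.$get('https://jsonplaceholder.typicode.com/posts')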
PS: all of this is possible because Promise.all preserves the order of the calls: https://stackoverflow.com/a/28066851/8816585
EDIT2: an example of how to make it more flexible could be
async asyncData({ $axios }) {
  const urlEndpointsToFetchFrom = ['comments', 'photos', 'albums', 'todos', 'posts']
  const allResponses = await Promise.all(
    urlEndpointsToFetchFrom.map((url) => $axios.$get(`https://jsonplaceholder.typicode.com/${url}`)),
  )
  const [comments, photos, albums, todos, posts] = allResponses
  return { comments, photos, albums, todos, posts }
},
Of course, preserving the order in the array destructuring is important. It can also be done dynamically, for example by pairing the endpoint list with the results, as in the sketch below.
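One possible dynamic version (a sketch using Object.fromEntries to pair each endpoint name with its result):

async asyncData({ $axios }) {
  const endpoints = ['comments', 'photos', 'albums', 'todos', 'posts']
  const allResponses = await Promise.all(
    endpoints.map((url) => $axios.$get(`https://jsonplaceholder.typicode.com/${url}`)),
  )
  // Promise.all preserves order, so endpoint names and responses line up
  return Object.fromEntries(endpoints.map((key, index) => [key, allResponses[index]]))
},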
Also, I cannot recommend enough that you try the fetch() hook alternative someday. I found it more flexible, and it has a nice $fetchState.pending helper; more here: https://nuxtjs.org/blog/understanding-how-fetch-works-in-nuxt-2-12/ and in the article at the bottom of the page.
First off: I'm a beginner at Vue.js/APIs, so I hope my question is not too stupid (I may not be seeing the obvious) :)
So,
Using Vue.js, I'm connecting to this API and want to track the history of each cryptocurrency (no issues with getting any data from the API).
Currency information is accessible using one URL:
https://api.coinranking.com/v2/coins
And history is accessible using another:
https://api.coinranking.com/v2/coin/ID_OF_THE_COIN/history
As you can see, the second URL needs the id of the specific currency, which is available in the first one.
I would like to find a way to make only one GET request for all currencies and their histories, rather than making as many requests as there are currencies (about 50 on this API). I've tried several things but none has worked yet (for instance, using the coins URL and storing the ids of the currencies in an array, then using the history URL and modifying it with the ids from the array, but I hit a wall).
Here's the axios GET request I have for the moment for a single currency:
const proxyurl = "https://cors-anywhere.herokuapp.com/"
const coins_url = "https://api.coinranking.com/v2/coins"
const history_url = "https://api.coinranking.com/v2/coin/Qwsogvtv82FCd/history"

// COINS DATA
axios
  .get(proxyurl + coins_url, { headers: reqHeaders }) // axios expects headers under the headers key
  .then((reponseCoins) => {
    // console.log(reponseCoins.data)
    this.crypto = reponseCoins.data.data.coins;
  })
  .catch((error) => {
    console.error(error)
  })

// GET ALL COINS UUIDs
axios
  .get(proxyurl + coins_url, { headers: reqHeaders })
  .then((reponseUuid) => {
    this.cryptoUuidList = reponseUuid.data.data.coins;
    // access each crypto uuid:
    this.cryptoUuidList.forEach(coinUuid => {
      console.log("id is: " + coinUuid.uuid)
      // adding uuids to the array:
      this.coinsUuids.push(coinUuid.uuid);
    });
  })
  .catch((error) => {
    console.error(error)
  })

// COIN HISTORY/EVOLUTION COMPARISON
axios
  .get(proxyurl + history_url, { headers: reqHeaders })
  .then((reponseHistory) => {
    // get data from the last element
    const history = reponseHistory.data.data.history
    this.lastItem = history[history.length - 1]
    // console.log(this.lastItem)
    this.lastEvol = this.lastItem.price
    // console.log(this.lastEvol)
    // get data from the previous element:
    this.previousItem = history[history.length - 2]
    this.previousEvol = this.previousItem.price
  })
  .catch((error) => {
    console.error(error)
  })
I probably forgot to give some info, so let me know and I will gladly share what I can.
Cheers,
I took a look at the API; they do not seem to give you a way to get everything you need in one request, so you will have to get each coin's history separately.
However, I do see a sparkline key in the returned data, with what seems to be a few of the latest prices.
I do not know your project's specifics, but maybe you could use that for your initial screen (for example a coin list), and only fetch the full history from the API when someone clicks to see the details of a coin.
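If you do end up fetching each history separately, you can at least fire the requests in parallel with Promise.all. A sketch reusing the question's proxyurl, reqHeaders, and the uuids collected above (this.histories is a hypothetical data property):

// Build one history request per collected uuid and run them concurrently
const historyRequests = this.coinsUuids.map((uuid) =>
  axios.get(proxyurl + `https://api.coinranking.com/v2/coin/${uuid}/history`, { headers: reqHeaders })
)

Promise.all(historyRequests)
  .then((responses) => {
    // Promise.all preserves order, so responses[i] matches this.coinsUuids[i]
    this.histories = responses.map((reponse) => reponse.data.data.history)
  })
  .catch((error) => {
    console.error(error)
  })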
I have a huge number of products in a collection on Shopify (over 50k products) and I need to delete them all. Is there a way I can automate that? All I can find on the internet is to use the "bulk edit tool", which is not much help here, as it can only grab 50 products at a time.
I've tried automating a script to update the rows with the CSV export file, but it takes over 6 hours for 20K products to import. Plus, since there are hashtags in the title and handle, it apparently doesn't overwrite the products for some reason. So I just can't use the archive anymore...
Has anyone run into this issue and found a solution?
Thank you!
When it comes to these kinds of tasks, I usually write myself a quick dev-console script that will do the job for me instead of relying on an app.
Here is a script that you can use in the dev console of your Shopify admin page (just copy/paste):
let productsArray = [];

// Recursive function that will grab all products from a collection
const requestCollection = (collectionId, url = `https://${window.location.host}/admin/api/2020-10/collections/${collectionId}/products.json?limit=250`) => {
  fetch(url).then(async res => {
    const link = res.headers.get('link');
    const data = await res.json();
    productsArray = [...productsArray, ...data.products];
    if (link && link.match(/<([^\s]+)>;\srel="next"/)) {
      const nextLink = link.match(/<([^\s]+)>;\srel="next"/)[1];
      requestCollection(collectionId, nextLink)
    } else {
      initDelete(productsArray)
    }
  })
}

// Get the CSRF token or the request will require a password
const getCSRFToken = () => fetch('/admin/settings/files', {
  headers: {
    "x-requested-with": "XMLHttpRequest",
    "x-shopify-web": 1,
    "x-xhr-referer": `https://${window.location.host}/admin/settings/files`
  }
}).then(res => res.text()).then(res => {
  const parser = new DOMParser();
  const doc = parser.parseFromString(res, 'text/html');
  return doc.querySelector('meta[name="csrf-token"]').getAttribute('content')
})

// The function that will start the deleting process
const initDelete = async (products) => {
  const csrfToken = await getCSRFToken();
  products.forEach(item => {
    fetch(`https://${window.location.host}/admin/api/2020-10/products/${item.id}.json`, {
      method: "delete",
      credentials: 'include',
      headers: {
        "x-csrf-token": csrfToken,
        "x-requested-with": "XMLHttpRequest",
        "x-shopify-web": 1,
        "x-xhr-referer": `https://${window.location.host}/admin/settings/files`
      }
    })
  })
}
And you start it by calling requestCollection(ADD_YOUR_COLLECTION_ID_HERE).
To clarify the script, there are 3 main functions:
requestCollection - this handles grabbing the products from the collection. It's a recursive function, since we can't grab more than 250 products at a time.
getCSRFToken - this grabs the CSRF token, since most of the post/update/delete requests require it or they will fail (I grab it from the files page).
initDelete - this function starts the delete process, stacking all the requests one after the other without waiting. You may want to await each request (see the sketch below), but even if you crash your browser, I think it will still be faster to repeat the process than to wait for each request to finish.
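If you'd rather await each request, a sequential variant of initDelete could look like this (same endpoint and headers as above):

// Sequential variant: await each delete so the browser isn't flooded
const initDeleteSequential = async (products) => {
  const csrfToken = await getCSRFToken();
  for (const item of products) {
    await fetch(`https://${window.location.host}/admin/api/2020-10/products/${item.id}.json`, {
      method: "delete",
      credentials: 'include',
      headers: {
        "x-csrf-token": csrfToken,
        "x-requested-with": "XMLHttpRequest",
        "x-shopify-web": 1,
        "x-xhr-referer": `https://${window.location.host}/admin/settings/files`
      }
    });
  }
}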
If you plan to use this script, please TEST IT BEFORE USING IT. Create a collection with a few products and run it against that, in case there are issues. I've tested it on my side and it's working, but it's code I wrote in 10 minutes after midnight; there can be issues.
Keep in mind that this script will delete ALL products in the collection you specify in the requestCollection(1231254125) call.
PS: All of this can be done with a Private App as well, with the products scope set to read/write, using a back-end language of your choice. The main difference is that you won't need the CSRF token or most of the headers I set above. But I like quick solutions that don't require you to pull out the big guns. A minimal sketch of that route is below.
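For illustration, a Node sketch of the Private App route, assuming a legacy private app's API key and password (the store domain and the API_KEY/PASSWORD names are placeholders):

// Delete one product through the Admin REST API using basic auth
const deleteProduct = (productId) =>
  fetch(`https://YOUR_STORE.myshopify.com/admin/api/2020-10/products/${productId}.json`, {
    method: 'DELETE',
    headers: {
      // Legacy private apps authenticate with API key + password via basic auth
      Authorization: 'Basic ' + Buffer.from(`${API_KEY}:${PASSWORD}`).toString('base64'),
    },
  })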