Getting Steam Player's Inventory List (DOTA 2)

I read some answers on other pages and found that http://steamcommunity.com/profiles//inventory/json/570/2 is how I can get the list of a player's inventory. After I went to that address, a lot of data came up. However, the problem is that the data is not presented in a readable way. I got something like this:
{"success":true,"rgInventory":{"7905269096":{"id":"7905269096","classid":"771158876","instanceid":"782509058","amount":"1","pos":1},"7832200468":{"id":"7832200468","classid":"626495772","instanceid":"1463199080","amount":"1","pos":2},"7832199378":{"id":"7832199378","classid":"626495770","instanceid":"1463199082","amount":"1","pos":3},"7832197795":{"id":"7832197795","classid":"626495773","instanceid":"1463199083","amount":"1","pos":4},"7832127932":{"id":"7832127932","classid":"771156290","instanceid":"1463199085","amount":"1","pos":5},"7832128369":{"id":"7832128369","classid":"626495771","instanceid":"1463199086","amount":"1","pos":6},"7832128042":{"id":"7832128042","classid":"466386035","instanceid":"1463199087","amount":"1","pos":7},"7830087148":{"id":"7830087148","classid":"536091705","instanceid":"1463199088","amount":"1","pos":8},"7822471023":{"id":"7822471023","classid":"771179852","instanceid":"782509058","amount":"1","pos":9},"7797472279":{"id":"7797472279","classid":"771410455","instanceid":"782509058","amount":"1","pos":10},"7782683766":{"id":"7782683766","classid":"771181072","instanceid":"782509058","amount":"1","pos":11},"7631976019":{"id":"7631976019","classid":"771157018","instanceid":"782509058","amount":"1","pos":12}},"rgCurrency":[],"rgDescriptions":{"771158876_782509058":{"appid":"570","classid":"771158876","instanceid":"782509058","icon_url":"W_I_5GLm4wPcv9jJQ7z7tz_l_0sEIYUhRfbF4arNQkgGQGKd3kMuVpMgCwRZrh6GdUmV2uVefqzZAxsqDpH8eVO4Nb2CyAaiWsVUbt1mBngc3Zm32FdEXSSFBuQVD4Z97J3LgwOxDlDHfjc9z40ChfLKg86GW_CBqRXhIgJ1zaQ3WkhKx3uK","icon_url_large":"W_I_5GLm4wPcv9jJQ7z7tz_l_0sEIYUhRfbF4arNQkgGQGKd3kMuVpMgCwRZrh6GdUmV2uVefqzZAxsqDpH8eVO4Nb2CyAaiWsVUbt1mBngc3Zm32CZOBWOAUKgdCoUqtJKW0Q7rCFKTLTVowoQBhPHGhMOGCK_YrRq1JVAm2rA7CM1GhVgPNerBnXLi","icon_drag_url":"","name":"Ogre's
Caustic Steel Choppers","market_hash_name":"Ogre's Caustic Steel
Choppers","market_name":"Ogre's Caustic Steel
Choppers","name_color":"D2D2D2","background_color":"","type":"Uncommon
Swords","tradable":0,"marketable":0,"commodity":0,"market_tradable_restriction":"7","market_marketable_restriction":"7","descriptions":[{"type":"html","value":"Used
By: Alchemist"}
Is there any way to make it neater so I can read it? Or can anyone give me ideas on how to process this data? Thanks heaps.

This data is in JSON format (http://www.json.org/). It is meant to be consumed by applications, so you should write a small program that reads this data, parses it, and looks up further details. For example (just guessing here), there may be an API where you can get an item by its ID (something like http://steamcommunity.com/items/7832200468).
The output of this program could be a list (text or HTML) of items with their names, values, rarity, etc.
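As a starting point, here is a minimal sketch in JavaScript, assuming the response body from the question is already in a string. Judging by the sample above, each entry in rgInventory pairs with a description in rgDescriptions keyed by "classid_instanceid":

// Minimal sketch: join rgInventory entries with their rgDescriptions.
// `rawJsonString` is assumed to hold the response body shown above.
const payload = JSON.parse(rawJsonString);

const items = Object.values(payload.rgInventory).map(function (entry) {
  // Descriptions are keyed by "<classid>_<instanceid>".
  const desc = payload.rgDescriptions[entry.classid + '_' + entry.instanceid];
  return {
    id: entry.id,
    pos: entry.pos,
    name: desc ? desc.name : '(no description in this response)',
    type: desc ? desc.type : ''
  };
});

items.sort(function (a, b) { return a.pos - b.pos; })
     .forEach(function (i) { console.log(i.pos + '. ' + i.name + ' [' + i.type + ']'); });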
Edit: see also this related question: Getting someone's Steam inventory

Related

Using an API to Extract All Comments from a Reddit Post

I am using the Reddit API (Pushshift): https://github.com/pushshift/api
Using the documentation, I understand how I can use this to extract every comment containing the word "covid" that was left in a certain time period:
https://api.pushshift.io/reddit/search/comment?q=covid&after=3h&before=2h&size=1
The output looks something like this:
{"data":[{"subreddit_id":"t5_2qh6p","author_is_blocked":false,"comment_type":null,"edited":false,"author_flair_type":"richtext","total_awards_received":0,"subreddit":"Conservative","author_flair_template_id":null,"id":"j98zf27","gilded":0,"archived":false,"collapsed_reason_code":null,"no_follow":false,"author":"VamboRoolOkay","send_replies":true,"parent_id":41917615743,"score":1,"author_fullname":"t2_7uxkru5f","all_awardings":[],"body":"I will never believe that election fraud wasn't a significant factor. Go ahead - call it a conspiracy theory. But I also maintained that Covid was lab-created. Truth is the Daughter of Time.","top_awarded_type":null,"author_flair_css_class":null,"author_patreon_flair":false,"collapsed":false,"author_flair_richtext":[{"e":"text","t":"Conservative"}],"is_submitter":false,"gildings":{},"collapsed_reason":null,"associated_award":null,"stickied":false,"author_premium":false,"can_gild":true,"link_id":"t3_116l7ct","unrepliable_reason":null,"author_flair_text_color":"dark","score_hidden":true,"permalink":"/r/Conservative/comments/116l7ct/kamala_harris_plans_on_running_with_biden_in_2024/j98zf27/","subreddit_type":"public","locked":false,"author_flair_text":"Conservative","treatment_tags":[],"created_utc":1676866031,"subreddit_name_prefixed":"r/Conservative","controversiality":0,"author_flair_background_color":"","collapsed_because_crowd_control":null,"distinguished":null,"retrieved_utc":1676866047,"updated_utc":1676866048,"body_sha1":"328df3784d15f77b98a84418c4ce720822227cfe","utc_datetime_str":"2023-02-20 04:07:11"}],"error":null,"metadata":{"es":{"took":98,"timed_out":false,"_shards":{"total":828,"successful":828,"skipped":824,"failed":0},"hits":{"total":{"value":573,"relation":"eq"},"max_score":null}},"es_query":{"size":1,"query":{"bool":{"must":[{"bool":{"must":[{"simple_query_string":{"fields":["body"],"query":"covid","default_operator":"and"}},{"range":{"created_utc":{"gte":1676862433000}}},{"range":{"created_utc":{"lt":1676866033000}}}]}}]}},"aggs":{},"sort":{"created_utc":"desc"}},"es_query2":"{\"size\":1,\"query\":{\"bool\":{\"must\":[{\"bool\":{\"must\":[{\"simple_query_string\":{\"fields\":[\"body\"],\"query\":\"covid\",\"default_operator\":\"and\"}},{\"range\":{\"created_utc\":{\"gte\":1676862433000}}},{\"range\":{\"created_utc\":{\"lt\":1676866033000}}}]}}]}},\"aggs\":{},\"sort\":{\"created_utc\":\"desc\"}}","api_launch_time":1673017478.254743,"api_request_start":1676873233.6143198,"api_request_end":1676873233.7406816,"api_total_time":0.12636184692382812}}
My Question: Suppose I identify a post that contains the word "covid". Now I want to retrieve every comment on this post. Is this possible to do?
For instance, based on the output of these results, I see that:
link_id: t3_116l7ct
parent_id: 41917615743
Can I somehow use this information to write an API query to retrieve all comments from this post?
I tried the following query but got an empty result: https://api.pushshift.io/reddit/comment/search/?link_id=t3_116cjib
Thanks!
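For what it's worth, here is a minimal sketch of how such a link_id query could be issued and decoded programmatically. The endpoint and parameter come straight from the question; the size parameter is illustrative, and whether the call returns rows depends on Pushshift having indexed that post:

// Minimal sketch (Node 18+ or a browser): query Pushshift for all
// comments on one post via link_id. Results depend on Pushshift's index.
async function commentsForPost(linkId) {
  const url = 'https://api.pushshift.io/reddit/comment/search/' +
              '?link_id=' + encodeURIComponent(linkId) + '&size=500';
  const res = await fetch(url);
  const body = await res.json();
  return body.data; // array of comment objects, shaped like the sample above
}

commentsForPost('t3_116l7ct')
  .then(comments => comments.forEach(c => console.log(c.author, ':', c.body)))
  .catch(console.error);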

Postgres INSERT returning 'invalid input syntax' for json

Problem: Attempting to insert a JSON string into a Postgres table column of json datatype intermittently returns this error for some record insertion attempts but not others.
I confirmed with multiple third-party JSON validator apps that the JSON I am inserting is indeed valid, and that any single quote ' characters have been escaped by doubling them (''), yet the issue persists.
What are some additional troubleshooting steps to consider?
Here is a scrubbed sample JSON I have attempted:
{"id": "jf4ba72kFNQ","publishedAt": "2012-09-02T06:07:28Z","channelId": "UCrbUQCaozffv1soNdfDROXQ","title": "Scout vs. Witch: a tale of boy meets ghoul (Official Version)","tags": ["L4D","TF2","SFM","animation","zombies","Valve","video game"],"description": "Howdy folks (he''s alive!). I made a new SFM video (October 2015), called \"Nick in a Hotel Room\". Please check it out: https://www.youtube.com/watch?v=FOCTgwBIun0\n\nAlso check out some early behind the scenes of Scout vs. Witch:\nhttps://www.youtube.com/watch?v=73tQEBgD09I\n\nYou can find links to my stuff on my website: http://nailbiter.net\n\n-----\n\nhey gang,\nI''m the animator who made this cartoon. Hope you like it.\n\nThis is my little mash-up of a bunch of stuff I like. What happens when the Scout from Valve''s Team Fortress 2 video-game walks into the wrong neighborhood (Left 4 Dead). Hilarity (and a bodycount) ensues. It was created using Source Film Maker (for all the dialog stuff and the montage at the beginning), and with TF2/Source SDK for the entire 300 alley-run sequence. I had already completed that part before SFM was released. The big zombie horde scenes and a couple others were shot in Left 4 Dead. I hope you get a kick out of it.\n\nStuff I did:\nI animated all of the characters (using Maya) except for the big crowd scenes and parts of the headcrab zombie (the crawling and the legs). The faces in the dialog scenes were animated in SFM.\n\nAlso did additional mapping, particles, motion graphics, zombie maya rigging, and created blendshapes for the Witch''s face to enable her to talk/emote. I didn''t do a full set, just the phonemes I needed for this performance. Inspiration for her performance was based on Meg Mucklebones (if you''ve ever seen Legend) mixed with the demon ladies in Army of Darkness. I have a feeling Valve had seen those movies too when they designed her..\n\nthanks for watching."}
I am answering this question myself by enumerating all the troubleshooting steps I have found so far: some are 'working knowledge' that practitioners will already have, others are more obscure insights (some buried in the Postgres docs, which are thorough but esoteric) that I found through my own trial and error.
Steps
Make sure you have escaped any single quote ' characters by doubling them as ''.
Make sure your JSON string is actually a single-line string. JSON is very easy to copy as a multiline string, and Postgres json columns will not accept newlines inside string values (the fix is as easy as hitting backspace on each newline).
The most obscure one I've found: even when it appears inside a JSON string field, the ? question mark weirdly enough breaks the JSON syntax for Postgres. Something like {"url": "myurl.com?queryParam=someId"} will be rejected as invalid. Solve this by escaping the question mark: {"url": "myurl.com\?queryParam=someId"}
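Beyond manual escaping, one way to sidestep most of these pitfalls is to pass the JSON as a bound parameter instead of splicing it into the SQL text, so the driver handles quoting. A minimal sketch using node-postgres (the pg package); the table and column names here are made up for illustration:

// Minimal sketch using node-postgres ("pg"). The table/column names
// (videos, data) are hypothetical. Binding the document as $1 lets the
// driver handle quoting, so single quotes, newlines, and question marks
// inside the JSON need no manual escaping.
const { Client } = require('pg');

async function insertJson(doc) {
  const client = new Client(); // connection settings come from PG* env vars
  await client.connect();
  try {
    await client.query('INSERT INTO videos (data) VALUES ($1::json)',
                       [JSON.stringify(doc)]);
  } finally {
    await client.end();
  }
}

insertJson({ id: 'jf4ba72kFNQ', title: 'Scout vs. Witch', url: 'https://example.com?queryParam=someId' })
  .catch(console.error);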

Struggling to import analyst share price to GoogleSheets

I am trying to create a column that imports the analyst price target from the TipRanks website.
I uploaded two images:
Image 1: you can see the cell that I want to import.
Image 2: you can see my function that doesn't work.
What should I change in order to get this live info?
Thanks.
The site you are checking is actually JavaScript-generated, so import functions won't work properly on it.
To check, just try to import the whole site's data. If it returns a JavaScript function, then the page is JavaScript-generated.
Sample (tipranks.com)
What you can do is actually try to find other sites that provide the same data.
I did find one with the same data you are looking for: 50.38 for CSIQ, at "https://www.marketwatch.com/investing/stock/csiq/analystestimates". And since the data is shown as a table, it is easier to import using IMPORTHTML.
Cell formula is:
=INDEX(IMPORTHTML("https://www.marketwatch.com/investing/stock/csiq/analystestimates", "table", 5), 2, 2)
Sample output:
The table is the fifth one in the DOM, and INDEX(table, 2, 2) gets the value at the 2nd row, 2nd column of that table.
If the site is no good for you, try to find other sites that suit your needs, then use either IMPORTHTML or IMPORTXML depending on the site's structure.
If you inspect the network while the website is loading, you will see that the prices come from calling the forecast endpoint https://www.tipranks.com/stocks/tsla/forecast. This returns an HTML response which is probably generated with JavaScript on the client (they use React on the frontend), but you can still see the preview in the Network tab of the browser dev tools.
You can then copy the preview into VSCode and prettify it, to try to pinpoint the span holding the price. It won't be an exact science, because the HTML tags are generated with some media queries, but you will get reasonably close.
If you have the XPath but get an empty result error, delete some trailing tags until you get some text, use search in Google Sheets to find the highest price label, and then keep adding tags back until you get the desired value.
Here is what I managed to get:
Lowest price target:
=importxml("https://www.tipranks.com/stocks/snow/forecast", "/html/body/div[1]/div[1]/div[4]/div[1]/div[2]/div[1]/div[4]/div[2]/div[2]/div[4]/div[1]/div[1]/div[5]/span[2]")
Average price target:
=importxml("https://www.tipranks.com/stocks/snow/forecast", "/html/body/div[1]/div[1]/div[4]/div[1]/div[2]/div[1]/div[4]/div[2]/div[2]/div[4]/div[1]/div[1]/div[3]/span[2]")
Highest price target:
=importxml("https://www.tipranks.com/stocks/snow/forecast", "/html/body/div[1]/div[1]/div[4]/div[1]/div[2]/div[1]/div[4]/div[2]/div[2]/div[4]/div[1]/div[1]/div[1]/span[2]")
These paths might change over time as the site evolves, but you can use the steps above to update the formulas.
P.S. I wasn't satisfied with the marketwatch analyst price targets. I think the wisdom of the crowd is better on tipranks.
Try this one. Works perfectly fine on my personal Stock Portfolio on Google Sheets:
Lowest Price Target:
=importxml(CONCATENATE("https://www.tipranks.com/stocks/", A1,"/forecast"), "//*[@class='colorpurple-dark ml3 mobile_fontSize7 laptop_ml0']")
Average Price Target:
=importxml(CONCATENATE("https://www.tipranks.com/stocks/", A4,"/forecast"), "//*[@class='colorgray-1 ml3 mobile_fontSize7 laptop_ml0']")
Highest Price Target:
=importxml(CONCATENATE("https://www.tipranks.com/stocks/", A4,"/forecast"), "//*[@class='colorpale ml3 mobile_fontSize7 laptop_ml0']")

getAllPrebidWinningBids() returns something but getAllWinningBids() is empty

I've been struggling with pbjs and DFP for several days now. My current problem is the one described in the title: when I type pbjs.getAllPrebidWinningBids() in the console, something is returned but nothing is displayed on my test page; when I type pbjs.getAllWinningBids(), an empty array is returned, and I don't get why.
A bit more info:
This is a test page on our server with no other competition;
We use custom price buckets;
In DFP, I have 5 line items from 0.00€ to 2.00€ (so a 0.50€ increment) that match the custom price buckets in the code;
The bids are "redirected" in the correct price buckets;
The code works and an ad is displayed when I set up a self-promotion campaign in DFP with a prebid snippet as a creative, so I suppose that something is wrong with the price buckets.
Would someone have an idea of what is blocking the selection of the bid and the rendering?
Thanks!
EDIT: I've come to realize that this is actually normal behaviour, since pbjs.getAllPrebidWinningBids() returns the bids that won the auction but haven't rendered on the page yet, while pbjs.getAllWinningBids() returns those that won and have also rendered.
So my question now is why the hell is no ad rendered at all?!
Here's my code (with a few dummy values), in case someone understands what's wrong: https://jsfiddle.net/8ewz9rgb/2/
This doesn't answer the original question, but rather your new issue of why no ads are rendering. It's because you are calling GPT's googletag.disableInitialLoad(): with that set, no ads render until googletag.refresh() is called, and that never happens here because the call sits in a pbjs queue and Prebid is not loading; the request for prebid.js is 404ing.
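For reference, the standard wiring the answer describes looks roughly like this (a sketch based on Prebid's documented basic setup; the ad unit definitions and the loading of prebid.js itself are assumed to exist elsewhere on the page):

// Sketch of the standard Prebid + GPT flow referred to above. Once
// disableInitialLoad() is set, nothing renders until refresh() runs in
// the bidsBackHandler -- and that handler never fires if prebid.js
// itself fails to load (the 404 mentioned above).
var googletag = googletag || {};
googletag.cmd = googletag.cmd || [];
var pbjs = pbjs || {};
pbjs.que = pbjs.que || [];

googletag.cmd.push(function () {
  googletag.pubads().disableInitialLoad(); // hold ad requests until bids return
});

pbjs.que.push(function () {
  pbjs.addAdUnits(adUnits); // adUnits: the page's ad unit config (defined elsewhere)
  pbjs.requestBids({
    bidsBackHandler: function () {
      pbjs.setTargetingForGPTAsync();   // copy hb_* targeting onto GPT slots
      googletag.cmd.push(function () {
        googletag.pubads().refresh();   // now GPT actually requests/renders ads
      });
    }
  });
});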

Use custom function to populate gSpreadsheet cell based on an XML/JSON response

Ok, this one has become a little tricky for me and I really need some assistance to work through it.
Problem
I have a GSpreadsheet which has a list of data, in this case Twitter usernames. Using the API of a service provider (in this case the Klout API), I would like to retrieve information about that user to populate a cell within a spreadsheet.
Based on what I can work out so far, I would need to write a custom function to do this but I have no idea where to start, how I might construct it, or if there are any examples of doing this.
Scenario
The Klout API can return either an XML or JSON response (see http://developer.klout.com/docs/read/api/API), based on the string passed. For example, the URL:
http://api.klout.com/1/users/show.xml?key=SECRET&users=thewinchesterau
would return the following XML response:
<users>
  <user>
    <twitter_id>17439480</twitter_id>
    <twitter_screen_name>thewinchesterau</twitter_screen_name>
    <score>
      <kscore>56.63</kscore>
      <slope>0</slope>
      <description>creates content that is spread throughout their network and drives discussions.</description>
      <kclass_id>10</kclass_id>
      <kclass>Socializer</kclass>
      <kclass_description>You are the hub of social scenes and people count on you to find out what's happening. You are quick to connect people and readily share your social savvy. Your followers appreciate your network and generosity.</kclass_description>
      <kscore_description>thewinchesterau has a low level of influence.</kscore_description>
      <network_score>58.06</network_score>
      <amplification_score>29.16</amplification_score>
      <true_reach>90</true_reach>
      <delta_1day>0.3</delta_1day>
      <delta_5day>0.5</delta_5day>
    </score>
  </user>
</users>
Based on this response, I would like to be able to populate different cells with the values returned within the XML (or JSON if easier) packet.
So, for example, I would have a spreadsheet like the following which would have custom functions to go out and retrieve the value of the relevant XML element response to populate the cell:
     A              B            C              D                    E
1    Username       kscore       Network score  Amplification score  True reach
2    thewinchester  =kscore(A2)  =nscore(A2)    =ascore(A2)          =tscore(A2)
Questions
Are there any gSpreadsheet examples you know of that use an API to pull data in from an external source?
How would one write a custom function to fetch the result from the API and populate a cell with a result of a specific element?
Any information, examples or helpers you have are greatly appreciated.
You want the importXML function, documented here. The formula you want will look something like this:
=importXML("http://api.klout.com/1/users/show.xml?key=SECRET&users=" + A1, "//users/user/score/kscore")
You could write a custom function with Google Apps Script, but there's a simpler solution, similar to what Nick Johnson posted. I've tested this against the score endpoint, but it could easily be adapted to the show endpoint with a different XPath.
=importXML("http://api.klout.com/1/klout.xml?users="&A1&"&key=YOUR_API_KEY", "//users/user/kscore")
This presumes your Twitter IDs are in the A column.
Note that Google Docs limits the number of such importXML functions to 50 per spreadsheet. You could concatenate groups of 5 user IDs for each importXML call, effectively raising your limit to 250 per sheet.
This could also be adapted to a similar call in Excel, which doesn't have that limit. Keep in mind the Klout ToS, though, using proper attribution and rate limits.
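For completeness, since the question asks how a custom function might be written: below is a minimal Apps Script sketch. KSCORE is a hypothetical name; the URL and XML layout are taken from the question. Paste it into the spreadsheet's script editor and then use =KSCORE(A2) in a cell:

// Minimal Apps Script sketch for a custom cell function. KSCORE is a
// hypothetical name; the URL and XML structure come from the question.
// Usage in a cell: =KSCORE(A2)
function KSCORE(username) {
  var url = 'http://api.klout.com/1/users/show.xml?key=SECRET&users=' +
            encodeURIComponent(username);
  var xml = UrlFetchApp.fetch(url).getContentText(); // HTTP GET
  var doc = XmlService.parse(xml);                   // parse the XML reply
  // Walk users > user > score > kscore and return it as a number.
  var score = doc.getRootElement()   // <users>
                 .getChild('user')   // <user>
                 .getChild('score')  // <score>
                 .getChildText('kscore');
  return Number(score);
}

The same pattern extends to the other fields (network_score, amplification_score, true_reach) by reading different child elements, which would give you the nscore/ascore/tscore functions sketched in the question's table.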