How to get different versions of a file in OneDrive using the MS Graph API?

I am referring to the MSDN documentation for the Graph API, and I am using Python and Graph Explorer to retrieve different attributes for a file/item in OneDrive. However, I am unable to find any information about managing and querying multiple versions of a file.
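For reference, Microsoft Graph does expose a versions collection on driveItem, and individual versions can be restored with the restoreVersion action. A minimal Python sketch, assuming you already have a valid OAuth access token and the file's item ID (both are placeholders below):

```python
import requests

GRAPH_ROOT = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<oauth-access-token>"  # placeholder: obtain via your auth flow
ITEM_ID = "<drive-item-id>"            # placeholder: the file's driveItem ID

headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# List all versions of the file.
resp = requests.get(f"{GRAPH_ROOT}/me/drive/items/{ITEM_ID}/versions",
                    headers=headers)
resp.raise_for_status()
for version in resp.json().get("value", []):
    print(version["id"], version.get("lastModifiedDateTime"), version.get("size"))
```

The same collection lets you fetch a single version by ID, and POSTing to .../versions/{version-id}/restoreVersion restores that version as the current one.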

Related

Salesforce Org Metadata Retrieval Approaches

I am investigating different approaches to obtain select metadata (ALL objects - including custom - and ALL fields) for any org configuration, reliably. We will then use this information to build a .CSV.
My company works with Net Zero Cloud and the NPSP. The Metadata API does not have full coverage of all the objects and fields we require (https://developer.salesforce.com/docs/metadata-coverage/56).
We know this is possible, as Workbench is able to retrieve the data we need via the REST API, albeit on an object-by-object basis.
Approaches:
1. Build an Apex class that uses the Schema class to retrieve all objects, loop through the objects and retrieve all fields, then convert the result to .CSV (either directly, or using a wrapper class to convert to JSON and then to .CSV).
2. Build a Node.js server to perform REST API calls and then write the data to the local file system in .CSV format (see the sketch below).
3. Parse the .XML directly from the Salesforce org.
Any thoughts are welcome. We need this to be repeatable for any org configuration. Thank you!
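For the REST-based approach (the same describe calls Workbench makes), here is a minimal Python sketch; the instance URL, access token, and API version are placeholders, and a real script would want error handling and throttling, since describing every object costs one API call each:

```python
import csv
import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"  # placeholder
ACCESS_TOKEN = "<session-or-oauth-token>"                # placeholder
BASE = f"{INSTANCE_URL}/services/data/v56.0"

headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# 1. List every object (standard and custom) visible in the org.
sobjects = requests.get(f"{BASE}/sobjects/", headers=headers).json()["sobjects"]

with open("org_fields.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["object", "field", "type", "label"])
    # 2. Describe each object to enumerate all of its fields.
    for obj in sobjects:
        describe = requests.get(f"{BASE}/sobjects/{obj['name']}/describe/",
                                headers=headers).json()
        for field in describe["fields"]:
            writer.writerow([obj["name"], field["name"],
                             field["type"], field["label"]])
```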

API Key use w/ Tableau

We would like to use the Podio API Key to directly connect to Tableau and have the data refreshed at a cadence set in Tableau. Is this possible?
Yes, connecting to data via an API is possible, and there are a few ways to do it:
Option #1: Web Data Connector
A WDC is a hosted web application, built with JavaScript, that connects to an API, converts the data to JSON, and passes it to Tableau. You'll need a web server to host your WDC and JavaScript skills to write it. Once it's set up, anyone in your org can grab the link and use it in Tableau Desktop. Since the data connection is made when the end user requests the WDC, you can build in customizations for your users (Ex: users can add filter parameters or authenticate with their own user/pass to only get what they have access to). WDC connections are extracts and can be refreshed on Tableau Server and Tableau Online. If you're using Tableau Online, you'll need to use Tableau Bridge to auto-refresh.
Option #2: Hyper API
The Hyper API allows you to create, modify, and update extract (.hyper) files that you can then publish to Tableau Server/Online on a regular cadence. It's available for Python, Java, .NET, and C++, so you will need skills in one of those languages. I suggest Python, as we have the most samples for it. You'll also need a server where you can run the extract refresh and publish scripts on a schedule. With the Hyper API you are creating a single extract for everyone to connect to. Once it's published to Tableau Server/Online, end users can connect to this data source directly without any setup, but this also means the connection can't be customized per user or use case.
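As a minimal sketch of the Hyper API route in Python (the table schema and the hard-coded row are placeholders; real rows would come from your Podio API calls):

```python
from tableauhyperapi import (Connection, CreateMode, HyperProcess, Inserter,
                             SqlType, TableDefinition, TableName, Telemetry)

# Start a local Hyper process and create (or overwrite) an extract file.
with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    with Connection(endpoint=hyper.endpoint,
                    database="podio_data.hyper",
                    create_mode=CreateMode.CREATE_AND_REPLACE) as connection:
        table = TableDefinition(
            table_name=TableName("Extract", "items"),
            columns=[
                TableDefinition.Column("item_id", SqlType.big_int()),
                TableDefinition.Column("title", SqlType.text()),
            ],
        )
        connection.catalog.create_schema("Extract")
        connection.catalog.create_table(table)
        with Inserter(connection, table) as inserter:
            inserter.add_row([1, "Example item"])  # placeholder row
            inserter.execute()
```

Publishing the resulting .hyper file on a schedule is typically scripted with the Tableau Server Client library (tableauserverclient).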
Option #3: Use a 3rd-party connector
If building your own connector doesn't appeal to you, there are also plenty of paid services that can bring your data into Tableau. Ex: tray.io, Dataddo, and Skyvia are a few I found after a quick Google search.

Analyze data in Google Cloud Datastore using Google Data Studio

I am new to databases, and have some data stored as entities in Google Cloud Datastore. I would like to be able to analyze and plot this data in a web interface, and it seems like Google Data Studio provides an easy-to-use way to do this. However, I'm a bit confused as to how I can actually use the two together; it seems like either Google Cloud Storage or Google BigQuery could act as a middleman between them, but I'm not sure how this would work. Could anyone advise on whether Google Data Studio is the best approach to plotting/analyzing data in Google Cloud Datastore, and if so, offer tips on how to go about it? There are a large number of tutorials, but none that I've found explain how to get data from Datastore into a usable form for Data Studio.
Thanks!
As Graham Polley says, the question is answered here. The workaround to connect Cloud Datastore to Google Data Studio is to first export Datastore entities to BigQuery, as explained in this guide.
Then see this in order to connect Data Studio to BigQuery tables.
Finally in this blog post, there's a tutorial for building a dashboard with Google Data Studio and BigQuery.
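To make the first step concrete, here is a minimal sketch of loading a managed Datastore export into BigQuery with the google-cloud-bigquery client, assuming you have already run gcloud datastore export to a Cloud Storage bucket (the bucket, dataset, and kind names below are placeholders):

```python
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.DATASTORE_BACKUP,
)

# Placeholder URI: the per-kind metadata file produced by the managed export.
uri = ("gs://YOUR_BUCKET/backup/all_namespaces/kind_MyKind/"
       "all_namespaces_kind_MyKind.export_metadata")

load_job = client.load_table_from_uri(
    uri, "your_project.your_dataset.my_kind", job_config=job_config)
load_job.result()  # wait for the load to finish

table = client.get_table("your_project.your_dataset.my_kind")
print(f"Loaded {table.num_rows} rows")
```

Once the table exists, Data Studio can read it through its native BigQuery connector.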

Getting a user's marketing source from Google Analytics

I'm a backend developer with no experience with Google Analytics, but I have a requirement to collect the marketing medium/source for each user from Google Analytics and save it in my database. I've been searching for a way to get it from an API request, but I haven't found one yet. Could you help?
You can use the Google Python API to fetch the Google Analytics data. You can read more here.
Medium and source information can be retrieved using the dimension ga:sourceMedium.
You can find more info about dimensions and metrics here
You can then set up a daily script that fetches the data from your Google Analytics account and dumps it to a CSV file, which you can subsequently load into your database using libraries such as psycopg2.
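As a minimal sketch of such a script, using the Analytics Reporting API v4 with a service account (the key file path and view ID are placeholders):

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/analytics.readonly"]
KEY_FILE = "service-account.json"  # placeholder: your service account key
VIEW_ID = "XXXXXXXX"               # placeholder: your GA view ID

credentials = service_account.Credentials.from_service_account_file(
    KEY_FILE, scopes=SCOPES)
analytics = build("analyticsreporting", "v4", credentials=credentials)

# One report request: sessions broken down by source/medium for yesterday.
response = analytics.reports().batchGet(body={
    "reportRequests": [{
        "viewId": VIEW_ID,
        "dateRanges": [{"startDate": "yesterday", "endDate": "yesterday"}],
        "metrics": [{"expression": "ga:sessions"}],
        "dimensions": [{"name": "ga:sourceMedium"}],
    }]
}).execute()

for row in response["reports"][0]["data"].get("rows", []):
    print(row["dimensions"][0], row["metrics"][0]["values"][0])
```

Each row's dimension value looks like "google / organic", which you can split into source and medium before writing it to CSV or straight into your database.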

Best way of migrating customised metadata associated with source components into a Tridion environment

If we are migrating content from a source content management system to Tridion, what is the best way to migrate the customized metadata associated with the components (content) of the source system into Tridion? Should we migrate it directly into SQL Server, or is there an option to migrate it in the form of an XML file, etc.?
Migrating directly into SQL Server is unsupported, and the entire system would be unsupported at that point, due to possible data consistency issues.
The most straightforward way is to read the data from the source system, and use the Tridion API to recreate the item.
If migrating metadata, some of the data would likely fit best into a taxonomy, which would mean you'd want to migrate the keywords / structure first, then tag the content as it came into Tridion.
You have a few options when migrating content into Tridion.
I can't tell from the above whether you are talking about migrating to SQL Server as an intermediate format, or directly into the Tridion database. Importing directly into the Tridion database is definitely not a supported solution and could lead to unpredictable results.
You need to use the API: either the Core Service or the TOM.NET API (if you have Tridion 2011), or the old TOM API if not.
A popular approach is to export all content into an XML format that you can then process with a .NET application.
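The processing step is typically written in .NET as noted above, but its shape is language-agnostic. Here is a minimal Python sketch of walking such an export, assuming a hypothetical schema in which each item element carries metadata children (every element and attribute name here is illustrative, not from any real export format):

```python
import xml.etree.ElementTree as ET

# Hypothetical export file and schema, for illustration only.
tree = ET.parse("export.xml")

for item in tree.getroot().iter("item"):
    title = item.get("title")
    # Collect the source system's custom metadata fields for this item.
    metadata = {field.get("name"): (field.text or "")
                for field in item.iter("metadata")}
    # At this point you would call the Core Service / TOM.NET API to create
    # the component in Tridion and populate its metadata fields.
    print(title, metadata)
```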
There are some good articles on migrating content into Tridion by Ryan Durkin here, and by Nuno Linhares here.
As mentioned before, migrating directly into the database is not an option if you are planning to use SDL Tridion as the final CMS.
Apart from choosing a supported mechanism for the migration, pay attention to how you are going to structure the metadata in the new CMS: depending on the volume, structure, hierarchy, and relations across metadata items, the process can become complex.
Also pay special attention to the Blueprint concept, as you can probably merge duplicated values from the old system into a single inherited item.
Don't think only about how to get the metadata into the system, but also about how that metadata will be used and maintained in the new CMS, in this case SDL Tridion.
You can also check a recent post about migration, and planning migrations in general, in case it adds some more information:
Can we automate migrating to SDL Tridion?