I am investigating different approaches to obtain select metadata (ALL objects - including custom - and ALL fields) for any org configuration, reliably. We will then use this information to build a .CSV.
My company works with the Net-Zero cloud and the NPSP. The metadata API does not have full coverage of all the objects and fields we require (https://developer.salesforce.com/docs/metadata-coverage/56).
We know this is possible, as Workbench can retrieve the data we need through REST API calls, albeit on an object-by-object basis.
Approaches:
Build an Apex class that uses the Schema class to retrieve all objects, loop through the objects to retrieve all fields, and convert the result into .CSV (either directly, or using a wrapper class to convert to JSON and then to .CSV).
Build a Node.js server to perform REST API calls and then write the data to the local file system in .CSV format (a rough sketch of this idea appears below).
Parse the .XML directly from the Salesforce org.
Any thoughts are welcome. We need this to be repeatable for any org configuration. Thank you!
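To illustrate the second approach (in Python here rather than Node.js), the REST API's describe resources return every object and every field, including custom ones, which are the same resources Workbench exposes object by object. The instance URL, access token, and API version below are placeholders for your org.

```python
# Rough sketch: dump every object and field visible to the integration user to CSV.
# INSTANCE_URL, ACCESS_TOKEN, and the API version are placeholders for your org.
import csv
import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"
ACCESS_TOKEN = "<access token from your OAuth flow>"
API = f"{INSTANCE_URL}/services/data/v56.0"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def get(path):
    resp = requests.get(f"{API}{path}", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

with open("org_fields.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["object", "field", "label", "type", "custom"])
    # Global describe lists every object, standard and custom.
    for sobject in get("/sobjects/")["sobjects"]:
        name = sobject["name"]
        # Per-object describe lists every field on that object (one call per object).
        for field in get(f"/sobjects/{name}/describe/")["fields"]:
            writer.writerow([name, field["name"], field["label"],
                             field["type"], field["custom"]])
```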
Is it possible to retrieve my data from Azure Cosmos DB in JSON format and share it with someone else without them accessing the actual environment? Something like an HTTP GET from SharePoint. I am new to Cosmos DB and APIs, so sorry if I am using the wrong terms here.
Update: Attempting an Azure Function
I attempted to create an HTTP trigger. Can I copy and paste the JSON into function.json and the JavaScript into index.js? I changed the databaseName and collectionName, but it doesn't return the Cosmos DB documents.
General
I think the easiest way to offer someone access to a specified collection would be to create an Azure Function. From the docs:
Azure Functions allows you to run small pieces of code (called "functions") without worrying about application infrastructure. With Azure Functions, the cloud infrastructure provides all the up-to-date servers you need to keep your application running at scale.
A function is "triggered" by a specific type of event. Supported triggers include responding to changes in data, responding to messages, running on a schedule, or as the result of an HTTP request.
C#
Here's an example of how this might look if you want to query documents by id:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2-input?tabs=csharp#http-trigger-look-up-id-from-query-string
If you want more complex queries to be executed, take a look at this section of the above-mentioned documentation:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2-input?tabs=csharp#http-trigger-look-up-id-from-route-data-using-sqlquery
So basically this enables you to provide an HTTP endpoint that is configured to run a specific query against your Cosmos DB instance.
JavaScript
An example of how to set up a Cosmos DB instance and create functions for CRUD operations in JS can be found here:
https://dev.to/vidamrr/cosmos-db-crud-operations-using-azure-functions-4d27
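Whichever language you pick, the shape of the function is the same: an HTTP trigger runs a query against a database/container and returns the matching documents, so the person you share the URL with never touches the Cosmos DB account itself. As a rough sketch (querying Cosmos DB directly with the azure-cosmos SDK instead of the input binding shown in the docs above), a Python HTTP trigger might look like this; the environment variable names, database name, and container name are placeholders:

```python
# Rough sketch of an HTTP-triggered function that returns documents matching ?id=...
# COSMOS_URL / COSMOS_KEY app settings, "MyDatabase", and "MyCollection" are placeholders.
import json
import os

import azure.functions as func
from azure.cosmos import CosmosClient


def main(req: func.HttpRequest) -> func.HttpResponse:
    doc_id = req.params.get("id")
    if not doc_id:
        return func.HttpResponse("Pass an ?id=... query parameter", status_code=400)

    client = CosmosClient(os.environ["COSMOS_URL"], credential=os.environ["COSMOS_KEY"])
    container = client.get_database_client("MyDatabase").get_container_client("MyCollection")

    items = list(container.query_items(
        query="SELECT * FROM c WHERE c.id = @id",
        parameters=[{"name": "@id", "value": doc_id}],
        enable_cross_partition_query=True,
    ))
    return func.HttpResponse(json.dumps(items), mimetype="application/json")
```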
I'm using a tool called Teamwork to manage my team's projects.
They have an online API that consists of JSON files that are accessible with authorisation:
https://developer.teamwork.com/projects/introduction/welcome-to-the-teamwork-projects-api
I would like to be able to convert this online data into an SQL database so I can create custom reports for my management.
I can't seem to find anything ready-made to do that.
I need a strategy to do this.
If you know how to program, this should be pretty straightforward.
In Python, for example, you could (a sketch putting these steps together follows the list):
Come up with a SQL schema that maps to the JSON data objects you want to store. Create it in a database of your choice.
Use the Requests library to download the JSON resources, if you don't already have them on your system.
Convert each JSON resource to a Python data structure using json.loads.
Connect to your database server using the appropriate Python library for your database. e.g., PyMySQL.
Iterate over the Python data, inserting rows into the database as appropriate. This is essentially the JSON-to-tables mapping from step 1 made procedural.
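Putting steps 2 through 5 together, a rough sketch might look like the following; the Teamwork endpoint, credentials, table, and column mapping are purely illustrative and need to match the schema you designed in step 1:

```python
# Rough sketch of steps 2-5. Endpoint, credentials, table, and columns are illustrative.
import pymysql
import requests

SITE = "https://yourcompany.teamwork.com"
API_KEY = "your_api_key"  # Teamwork's API key goes in as the basic-auth username

# Step 2: download a JSON resource.
resp = requests.get(f"{SITE}/projects.json", auth=(API_KEY, "x"))
resp.raise_for_status()

# Step 3: parse it into Python data structures (resp.json() wraps json.loads).
projects = resp.json()["projects"]

# Step 4: connect to the database; the schema from step 1 is assumed to exist.
conn = pymysql.connect(host="localhost", user="report", password="secret", database="teamwork")

# Step 5: map each JSON object to a row.
with conn.cursor() as cur:
    for p in projects:
        cur.execute(
            "INSERT INTO projects (id, name, status) VALUES (%s, %s, %s)",
            (p["id"], p["name"], p["status"]),
        )
conn.commit()
conn.close()
```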
If you are not looking to do this in code, you should be able to use an open-source ETL tool to do this transformation. At LinkedIn a coworker of mine used to use Talend Data Integration for solid ETL work of a very similar nature (JSON to SQL). He was very fond of it and I respected his opinion, so I figured I should mention it, although I have zero experience of it myself.
Is there a way to establish OData on top of Azure Data Lake Analytics tables via REST APIs?
It seems there are REST APIs to get ADLA job information, account information, etc.
Are there any existing APIs to get the data, or is it possible to create an API to access the data using the OData concept?
If you want to access data in ADLS files, there are REST APIs to get data from the lake. ADLS supports WebHDFS APIs with OAuth.
If you want to send queries and see their results, or get U-SQL table data, you would have to write your own shim that translates the query submitted via your API into a U-SQL script that outputs a file, then transparently downloads the file and returns it as the result.
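To make the shim idea concrete, here is a rough Python sketch. submit_usql_job is a hypothetical stand-in for however you submit and wait on the job (the ADLA Job REST API or the Azure SDK); the store name, token, and output path are placeholders, and the WebHDFS URL shape should be double-checked against the ADLS documentation:

```python
# Hypothetical shim: wrap the caller's query in a U-SQL script that OUTPUTs a CSV,
# run the job, then stream the result file back over the ADLS WebHDFS endpoint.
import requests

ADLS_STORE = "mystore"            # placeholder ADLS account name
TOKEN = "<OAuth bearer token>"    # placeholder


def submit_usql_job(script: str) -> None:
    """Hypothetical stand-in: submit the script via the ADLA Job REST API or the
    Azure SDK and block until the job succeeds."""
    raise NotImplementedError


def run_query(user_query: str, out_path: str = "/output/result.csv") -> bytes:
    # 1. Translate the caller's query into a U-SQL script that writes a file.
    script = f"""
    @result = {user_query};
    OUTPUT @result TO "{out_path}" USING Outputters.Csv(outputHeader: true);
    """
    submit_usql_job(script)

    # 2. Transparently download the output file over WebHDFS (OPEN operation).
    url = f"https://{ADLS_STORE}.azuredatalakestore.net/webhdfs/v1{out_path}?op=OPEN"
    resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    # 3. Return the bytes as the result of your own API call.
    return resp.content


# e.g. csv_bytes = run_query("SELECT * FROM MyDb.dbo.MyTable")
```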
Note that so-called interactive support is on the roadmap and being worked on. Once that is available, you can access the data using standard query APIs (such as ODBC, JDBC etc).
I have gone through the documentation, and it's a bit hard for me to grasp how one should go about writing an adapter for anything. I want to ease access to RESTful web services with a SQL-like interface for business folks.
Coarse requirements look something like:
Register a data source, in this case an endpoint
Add a mapping from the endpoint to a table
Execute simple SELECT queries
Allow joins to be performed on the basis of some join key, but in client application memory
Represent the output in tabular format
Try using Calcite's file adapter, which was just added in release 1.12.
The simplest use case is reading and parsing a CSV file from the file system, and presenting it as a table that can be used in a SQL statement. But in addition to files, the file adapter can read documents via HTTP, and it can parse the contents of HTML tables. So you should be able to use it to read data from a REST service.
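As a rough illustration, a model file along the following lines points the file adapter at a document served over HTTP and exposes it as a table (the schema name, table name, and URL are placeholders; see the file adapter documentation for the exact operand options, e.g. selectors for picking a specific HTML table):

```json
{
  "version": "1.0",
  "defaultSchema": "REST",
  "schemas": [
    {
      "name": "REST",
      "type": "custom",
      "factory": "org.apache.calcite.adapter.file.FileSchemaFactory",
      "operand": {
        "tables": [
          {
            "name": "ORDERS",
            "url": "https://example.com/reports/orders.html"
          }
        ]
      }
    }
  ]
}
```

You would then connect through Calcite's JDBC driver with a connection string like jdbc:calcite:model=/path/to/model.json and run SELECT * FROM REST.ORDERS; since Calcite executes in the client process, the joins-in-application-memory requirement above comes along for free.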
I've developed an app that uses Parse.com as the back end. I now need a dashboard analytics software package (such as iDashboards) that will enable me to pull data from my Parse.com database classes and present some of that data in a pretty dashboard fashion.
iDashboards looks to be the kind of tool I'm after, but it only supports certain data source inputs such as JDBC, ODBC, SQL, MySQL, etc. Not being a database guru by any means, I'm not sure if Parse.com can be classed as any of the above, but from what I've read it doesn't come under any of these categories.
Can anybody either recommend a way of connecting Parse.com to iDashboards, or suggest another dashboard tool that will support Parse.com as a data source?
The main issue you are facing is that data coming out of Parse.com is going to be in JSON format. Most dashboards are going to prefer CSV files.
The best dashboard I am aware of is Tableau, and there is a discussion about getting JSON into Tableau here: http://community.tableau.com/ideas/1276
If your preference is using iDashboards, then you need to convert the JSON coming out of Parse into a CSV format that iDashboards can consume. You can do that using RJSON as mentioned in the post above, but you'll probably have an easier time of it with a simple PHP or Python script that periodically connects to Parse, pulls out data updates for you, and then pushes them to your dashboard of choice.
Converting JSON to CSV in PHP is addressed here: Converting JSON to CSV format using PHP
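A Python version of such a script can stay very short. In the sketch below the class name, column list, and credentials are placeholders, and the base URL is the classic Parse.com REST API; run it on a schedule (cron or similar) to keep the dashboard fed:

```python
# Rough sketch: pull one Parse class over REST and rewrite it as CSV for the dashboard.
# The app ID, REST key, class name, and columns are placeholders for your app.
import csv
import requests

APP_ID = "yourAppId"
REST_KEY = "yourRestApiKey"
CLASS_NAME = "GameScore"
COLUMNS = ["objectId", "playerName", "score", "createdAt"]

resp = requests.get(
    f"https://api.parse.com/1/classes/{CLASS_NAME}",
    headers={"X-Parse-Application-Id": APP_ID, "X-Parse-REST-API-Key": REST_KEY},
    params={"limit": 1000},
)
resp.raise_for_status()

with open(f"{CLASS_NAME}.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=COLUMNS, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(resp.json()["results"])  # Parse returns {"results": [...]}
```

Note this only handles flat columns cleanly; nested objects and arrays run into exactly the denormalized-data issue the next answer describes.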
The difference is much more fundamental than "unsupported file format". In fact, JSON data coming out of Parse is stored in a so-called denormalized form, which means that a single JSON data file may contain the equivalent of arbitrarily many tables in a relational database. Stated differently, one JSON file may translate into potentially many CSV files, and there's no unique choice of how to perform that translation.
This is a so-called ETL problem, where ETL stands for Extract-Transform-Load. As such, you may be interested in open source ETL tools such as Kettle. Kettle is supported by Pentaho and includes functionality that can help you develop a workflow to turn JSON data into multiple CSV files that can then be imported into iDashboards (or similar). Aside from Kettle, Talend is also widely used for this purpose and has the same ability.
Finally, note that Parse is powered by MongoDB, and exports JSON data that is easily stored and manipulated in MongoDB. As such, a natural fit for reporting on Parse data is any reporting tool built for MongoDB.
As of the time of this writing, there are two such options:
JSON Studio, which is a commercial solution that is built explicitly for MongoDB and has your stated capability to produce dashboards.
SlamData, an open source solution also built for MongoDB, which allows native SQL on the database. The current version does not have reporting capabilities (just CSV export), but the 2.09 version due out in June has reporting dashboards baked in.
An advantage of using a MongoDB reporting tool is that you will not have to wrangle your data into relational form. If it's heavily nested, using arrays, and so forth, it can be quite painful to develop an ETL workflow and keep it in sync with how the data is changing. Instead, all you have to do is build a script to pipe the raw data from Parse into a MongoDB instance (perhaps hosted by MongoLab or equivalent, if you don't want to host it yourself), and connect the MongoDB reporting tool on top.
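That piping script can also stay small. For example (placeholder credentials, class name, and a MongoLab-style connection string):

```python
# Rough sketch: mirror one Parse class into a MongoDB collection for a reporting tool.
# Credentials, class name, and the connection string are placeholders.
import requests
from pymongo import MongoClient

APP_ID = "yourAppId"
REST_KEY = "yourRestApiKey"
CLASS_NAME = "GameScore"

resp = requests.get(
    f"https://api.parse.com/1/classes/{CLASS_NAME}",
    headers={"X-Parse-Application-Id": APP_ID, "X-Parse-REST-API-Key": REST_KEY},
    params={"limit": 1000},
)
resp.raise_for_status()

client = MongoClient("mongodb://user:pass@host.mongolab.com:27017/parse_mirror")
collection = client["parse_mirror"][CLASS_NAME]
for doc in resp.json()["results"]:
    # Upsert on the Parse objectId so re-runs refresh documents instead of duplicating them.
    collection.replace_one({"_id": doc["objectId"]}, doc, upsert=True)
```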
You might also contact Parse and see if they have a recommended solution for this. It occurs to me they should probably bake some sort of analytical / reporting functionality into their APIs as this is such a common use case.
You can use Axibase Time Series Database (ATSD) to ingest your data from parse.com; it has built-in dashboards and widgets for visualization, or you can just export data from ATSD to CSV and use iDashboards.