Apache Calcite - Access RESTful Service with SQL

I have gone through the documentation and it's a bit hard for me to grasp how one should go about writing an adapter for anything. I want to ease access to RESTful web services with a SQL-like interface for business folks.
Coarse requirements look something like:
Register a data source, in this case an endpoint
Add a mapping from the endpoint to a table
Execute simple SELECT queries
Allow joins to be performed on the basis of some join key, but in client application memory
Represent the output in tabular format

Try using Calcite's file adapter, which was just added in release 1.12.
The simplest use case is reading and parsing a CSV file from the file system and presenting it as a table that can be used in a SQL statement. But in addition to files, the file adapter can read documents via HTTP, and it can parse the contents of HTML tables. So you should be able to use it to read data from a REST service.
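For example, a model file along these lines (the schema name, table name and URL are placeholders, and the target page must contain an HTML table the adapter can parse) exposes a document fetched over HTTP as a SQL table:

```json
{
  "version": "1.0",
  "defaultSchema": "WEB",
  "schemas": [
    {
      "name": "WEB",
      "type": "custom",
      "factory": "org.apache.calcite.adapter.file.FileSchemaFactory",
      "operand": {
        "tables": [
          {
            "name": "CITIES",
            "url": "http://example.com/cities.html"
          }
        ]
      }
    }
  ]
}
```

You can then run queries such as SELECT * FROM web.cities from sqlline or any JDBC client using a connect string like jdbc:calcite:model=/path/to/model.json. Joins between two such tables are executed by Calcite itself in the client process, which lines up with the requirement to join in application memory.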

Related

Salesforce Org Metadata Retrieval Approaches

I am investigating different approaches to obtain select metadata (ALL objects - including custom - and ALL fields) for any org configuration, reliably. We will then use this information to build a .CSV.
My company works with the Net-Zero cloud and the NPSP. The metadata API does not have full coverage of all the objects and fields we require (https://developer.salesforce.com/docs/metadata-coverage/56).
We know this is possible, as the Workbench REST API is able to retrieve the data we need, albeit on an object-by-object basis using REST API calls.
Approaches:
Build an Apex class that uses the Schema class to retrieve all objects. Loop through the objects and retrieve all fields. Convert the custom object into .CSV (either directly, or using a wrapper class to convert to JSON and then to .CSV).
Build a node.js server to perform REST API calls and then write the data to the local file system in .CSV format (a sketch of this REST-based flow follows this list).
Parse the .XML directly from the Salesforce org.
Any thoughts are welcome. We need this to be repeatable for any org configuration. Thank you!
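For the REST-based approach, here is a minimal sketch in Python (the same calls could be made from node.js). The instance URL, API version and access token are placeholders you would obtain via OAuth; the sObject list and describe endpoints are standard REST API resources:

```python
# Hypothetical sketch: list every sObject, describe each one, and flatten
# object/field metadata into a CSV. Placeholders: INSTANCE, API, the token.
import csv
import requests

INSTANCE = "https://yourinstance.my.salesforce.com"   # placeholder
API = "v56.0"                                         # placeholder version
HEADERS = {"Authorization": "Bearer <ACCESS_TOKEN>"}  # placeholder token

def get(path):
    r = requests.get(f"{INSTANCE}/services/data/{API}/{path}", headers=HEADERS)
    r.raise_for_status()
    return r.json()

# List every object, standard and custom.
names = [o["name"] for o in get("sobjects/")["sobjects"]]

with open("org_metadata.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["object", "field", "type", "label"])
    for name in names:
        for field in get(f"sobjects/{name}/describe/")["fields"]:
            w.writerow([name, field["name"], field["type"], field["label"]])
```

Because this uses the describe calls rather than the Metadata API, coverage should match what you see in Workbench; the trade-off is one describe call per object, so watch your API limits on large orgs.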

CosmosDB Rest API - Http Request

Is it possible to retrieve my data in Azure Cosmos DB in JSON format and share it with someone else without them accessing the actual environment? Something like an HTTP GET from SharePoint. I am new to Cosmos and APIs, so sorry if I am using the wrong terms here.
Update Attempting Azure Function:
I attempted to create an HTTP trigger. Can I copy and paste the JSON into function.json and the JavaScript into index.js? I changed the databaseName and collectionName, but it doesn't return the Cosmos documents.
General
I think the easiest way to offer someone access to a specified collection would be to create an Azure Function. From the docs:
Azure Functions allows you to run small pieces of code (called "functions") without worrying about application infrastructure. With Azure Functions, the cloud infrastructure provides all the up-to-date servers you need to keep your application running at scale.
A function is "triggered" by a specific type of event. Supported triggers include responding to changes in data, responding to messages, running on a schedule, or as the result of an HTTP request.
C#
Here's an example of how this might look if you want to query documents by id:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2-input?tabs=csharp#http-trigger-look-up-id-from-query-string
If you want more complex queries to be executed, take a look at this section of the abovementioned documentation:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2-input?tabs=csharp#http-trigger-look-up-id-from-route-data-using-sqlquery
So basically this enables you to provide an HTTP endpoint that is configured to run a specific query against your Cosmos DB instance.
JavaScript
An example of how to set up a CosmosDB instance and create functions for CRUD operations in JS can be found here:
https://dev.to/vidamrr/cosmos-db-crud-operations-using-azure-functions-4d27
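Regarding the update: copying the bindings into function.json and the code into index.js is the right mechanics, but the binding settings have to match your account. A minimal sketch of a function.json for an id lookup might look like this (database, collection and route names are placeholders; the container is assumed to be partitioned on /id, and CosmosDBConnection must exist as a connection string in the Function App settings):

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "function",
      "methods": ["get"],
      "route": "items/{id}"
    },
    {
      "type": "cosmosDB",
      "direction": "in",
      "name": "documents",
      "databaseName": "mydb",
      "collectionName": "mycoll",
      "connectionStringSetting": "CosmosDBConnection",
      "id": "{id}",
      "partitionKey": "{id}"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```

The index.js body then only needs to copy context.bindings.documents into context.res.body. If no documents come back, the usual culprits are a connectionStringSetting name that doesn't exist in the app settings, or a partition key that doesn't match how the container is actually partitioned.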

Convert an online JSON set of files to a relational DB (SQL Server, MySQL, SQLite)

I'm using a tool called Teamwork to manage my team's projects.
They have an online API that consists of JSON files which are accessible with authorisation:
https://developer.teamwork.com/projects/introduction/welcome-to-the-teamwork-projects-api
I would like to be able to convert this online data to a SQL DB so I can create custom reports for my management.
I can't seem to find anything ready-made to do that.
I need a strategy to do this.
If you know how to program, this should be pretty straightforward.
In Python, for example, you could:
1. Come up with a SQL schema that maps to the JSON data objects you want to store. Create it in a database of your choice.
2. Use the Requests library to download the JSON resources, if you don't already have them on your system.
3. Convert each JSON resource to a Python data structure using json.loads.
4. Connect to your database server using the appropriate Python library for your database, e.g. PyMySQL.
5. Iterate over the Python data, inserting rows into the database as appropriate. This is essentially the JSON-to-tables mapping from step 1 made procedural (a sketch follows this list).
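A minimal sketch of those steps, assuming a hypothetical tasks resource and a deliberately tiny one-table schema (real Teamwork resources are richer, and the exact endpoint and key names should be checked against the API docs):

```python
# Steps 2-5 made concrete. Placeholders: the Teamwork domain, the API key
# (Teamwork uses the key as the basic-auth username), endpoint and key names.
import requests
import pymysql

API_URL = "https://yourcompany.teamwork.com/tasks.json"  # placeholder
AUTH = ("YOUR_API_KEY", "x")                             # placeholder key

# Steps 2-3: download the JSON resource and parse it into Python objects.
resp = requests.get(API_URL, auth=AUTH)
resp.raise_for_status()
tasks = resp.json().get("todo-items", [])   # key name depends on the resource

# Step 4: connect to the database.
conn = pymysql.connect(host="localhost", user="report",
                       password="secret", database="teamwork")
try:
    with conn.cursor() as cur:
        # Step 1's schema, reduced to a single illustrative table.
        cur.execute("""CREATE TABLE IF NOT EXISTS task (
                           id   BIGINT PRIMARY KEY,
                           name VARCHAR(255))""")
        # Step 5: map each JSON object onto a row.
        for t in tasks:
            cur.execute("REPLACE INTO task (id, name) VALUES (%s, %s)",
                        (t["id"], t["content"]))
    conn.commit()
finally:
    conn.close()
```

Run on a schedule (cron, Task Scheduler), this gives you a SQL copy of the API data that any reporting tool can query.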
If you are not looking to do this in code, you should be able to use an open-source ETL tool to do this transformation. At LinkedIn, a coworker of mine used Talend Data Integration for solid ETL work of a very similar nature (JSON to SQL). He was very fond of it and I respected his opinion, so I figured I should mention it, although I have zero experience with it myself.

What file format can be used to save/access data instead of a database

There is a situation in my company where we are developing a lightweight .NET web application with as few dependencies as possible. The application will be hosted on the client's server. However, there will not be any internet connection, and they will use the application locally.
We do not want any type of database installation on the client machine; we want to keep it as simple as possible on the client side. For this purpose we want to save/access data from a file, as the data on the client side will not exceed 100,000 rows. We are also concerned about the speed of accessing the data.
Here I want to ask: how should the data be saved in a file so that it can be accessed fast? What should the file format be?
Alternatively, can I use a DB file which does not require any database installation on the client side?
You could save all data to a JSON file, but this will become increasingly slow as the data grows and is prone to corruption.
Also, have a look at SQLite.
You can try SQL Server Compact Edition or SQLite. Both are file-based solutions and fit your need.
The advantage of using these two is that you can perform almost all the usual database queries against them, and data retrieval will be very fast. You can also think about optimizing the data storage, creating tables, etc.
You can use SQLite, which is heavily used in such scenarios (among others, by Chrome and Firefox). It is even public domain, so there are no license costs.
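To illustrate how little setup the file-based route needs, here is a sketch in Python (in a .NET application you would use a provider such as Microsoft.Data.Sqlite or System.Data.SQLite instead, but the idea is identical; the table and file names are made up):

```python
# The single file app.db is the entire "database installation".
import sqlite3

conn = sqlite3.connect("app.db")   # creates the file if it does not exist
cur = conn.cursor()
cur.execute("""CREATE TABLE IF NOT EXISTS customer (
                   id   INTEGER PRIMARY KEY,
                   name TEXT NOT NULL)""")
# An index keeps lookups fast even at ~100,000 rows.
cur.execute("CREATE INDEX IF NOT EXISTS idx_customer_name ON customer(name)")
cur.execute("INSERT INTO customer (name) VALUES (?)", ("Acme Ltd",))
conn.commit()

for row in cur.execute("SELECT id, name FROM customer WHERE name = ?",
                       ("Acme Ltd",)):
    print(row)
conn.close()
```

Indexed lookups over 100,000 rows in SQLite are effectively instant, and deploying the application means copying one file.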

How can I provide users with the functionality of the DBUnit DatabaseOperation methods from a web interface?

I am currently updating a java-based web application which allows database developers to create stored procedure regression test suites for database testing.
Currently, for test setup, execution and clean-up stages, the user is provided with text boxes where they are able to enter SQL code which is executed by the isql command.
I would like to extend the application to use DBUnit's DatabaseOperation methods to provide more ways to set up the state of the database than just SQL statements. The main reason for using DBUnit rather than just SQL statements is to be able to create and store XML and XLS DataSets on a server, where they can be associated with their test cases and used for data setup.
My question is:
How can I provide users with the functionality of the DBUnit DatabaseOperation methods from a web interface?
I have considered:
Creating a simple programming language and a parser to read some simple syntax involving the DBUnit method names, each of which accepts a parameter giving the file location of an XML or XLS DataSet. I was thinking of allowing the user to register the files they need with the web app, which would catalogue them and provide each file with an identifier that could be passed as a parameter to the methods in this simple programming language.
Creating an XML DTD which provides the user with the ability to specify operations and parameters. If I went with this approach, how can I execute the methods and their parameters that I parse from the XML document?
Creating a table in the database which stores the method and an FK relation to a catalogued DataSet file; however, I don't think this would be a good solution, because data entry would be tedious.
Thanks for your help.
This actually seems like a rather simple problem when I think about it again.
DBUnit has plugins for Maven and Ant integration which run tests written in XML in the Maven POM file.
I'm going to take a similar approach and go ahead with the XML option, using the Xerces-J parser, and create a collection of Operation, Export and Compare objects which are run in order.
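As a sketch of what such a test document could look like (the element and attribute names here are hypothetical, invented for this answer rather than an existing DBUnit format; the dataset identifiers refer to files catalogued by the web app):

```xml
<test name="usp_add_customer basic case">
  <setup>
    <!-- CLEAN_INSERT the catalogued DataSet with identifier 17 -->
    <operation type="CLEAN_INSERT" dataset="17"/>
  </setup>
  <execute>
    <sql>exec usp_add_customer 'Acme Ltd'</sql>
  </execute>
  <verify>
    <!-- compare the customer table against expected DataSet 18 -->
    <compare dataset="18" table="customer"/>
  </verify>
  <cleanup>
    <operation type="DELETE_ALL" dataset="17"/>
  </cleanup>
</test>
```

At execution time the parser maps each type attribute onto the corresponding DBUnit constant (DatabaseOperation.CLEAN_INSERT, DatabaseOperation.DELETE_ALL, and so on) and calls its execute method with the database connection and the DataSet loaded from the catalogue.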