How to load multiple tables via API in QlikView

We are trying to create QVDs by loading data through an API, using the QVSource web connector. Our custom API currently provides two tables (Customers & Orders).
When we connect QVSource and choose JSON TO TABLE, it only shows the first table and only creates a load script for that first table.
How can I make QVSource produce a load script for more than one table?
I am also using the Fiddler web debugger, which shows that the API is indeed returning both tables.
Screenshot attached.
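For reference, since the connector appears to generate one LOAD per table, a script covering both tables would look something like the minimal sketch below. The connector URLs, table parameters, and QVD paths are hypothetical placeholders for whatever QVSource actually generates for each table it exposes.

    // Minimal sketch: one connector request and one LOAD per table,
    // each stored into its own QVD. URLs and paths are placeholders.
    Customers:
    LOAD *
    FROM [http://localhost:5555/QVSource/JsonConnector/?table=Customers] (qvx);
    STORE Customers INTO Customers.qvd (qvd);

    Orders:
    LOAD *
    FROM [http://localhost:5555/QVSource/JsonConnector/?table=Orders] (qvx);
    STORE Orders INTO Orders.qvd (qvd);

In practice this would mean generating the load script once per table in QVSource and combining the generated LOAD statements into a single script.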

Related

Connect different clients' GA4 & UA accounts to one BigQuery project

How do I connect various different clients' Google Analytics (GA4 & UA) accounts to one instance of BigQuery? I want to store the analytics reports in BigQuery and then visualise them on a unified dashboard in Looker.
You can set up the exports from Google Analytics to go to the same BigQuery project, and transfer historical data to the same project as well.
Even if the data is spread across multiple GCP projects, you can still query it all from a single project. I would suggest you create a query that joins the data from the multiple sources together. You can then save it as a view and add it as a source in Looker, use it as a custom query in Looker, or, for best efficiency, save the results of your query as a new reporting table that feeds into Looker.
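As a sketch of that idea, a single view in the reporting project could union the GA4 event exports from each client's project (the project and dataset names below are hypothetical placeholders). UA's export uses a different schema (ga_sessions_* tables), so its fields would need to be mapped to the same column names before being added to the union.

    -- Sketch: union GA4 event exports from multiple client projects into one view
    CREATE OR REPLACE VIEW `reporting-project.analytics.all_clients_events` AS
    SELECT 'client_a' AS client, event_date, event_name, user_pseudo_id
    FROM `client-a-project.analytics_111111.events_*`
    UNION ALL
    SELECT 'client_b' AS client, event_date, event_name, user_pseudo_id
    FROM `client-b-project.analytics_222222.events_*`;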

Extracting and loading data from a REST API with Data Factory automatically whenever there is a new report ID

I am aiming to create a service combining Azure Data Factory and Azure Databricks where one or several reports are extracted and stored from a REST API (Swagger). I have managed to obtain the data of the reports manually for each individual ID using Copy Data from the REST API. However, I'd like to obtain all the reports that share the same name, and whenever there is a new one (a new ID), automatically extract and store it.
I assume this could be made dynamic by querying the IDs for that specific name with a periodic trigger, and if there is eventually a new one, the data would be queried using the updated list of IDs. However, I can't get my mind around how to use this list as parameters, and I haven't found a function that could do this.
I have also tried to use Azure Functions, but I am not sure whether it would cover this need, and I gave up as I struggled with the configuration.
Could I get some help please?
Thanks in advance!
Santiago
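For what it's worth, the polling idea described in the question (list the report IDs for a given name, then fetch only the ones not yet stored) can be sketched outside ADF, for example as the kind of Azure Function the asker mentions. The API paths, field names, and report name below are hypothetical placeholders, not the real API.

    # Sketch of the polling idea: list report IDs for a given report name,
    # compare against the IDs already processed, and fetch only the new ones.
    import requests

    API_BASE = "https://example.com/api"   # hypothetical REST API base URL
    REPORT_NAME = "monthly-report"         # hypothetical report name

    def fetch_new_reports(known_ids: set) -> dict:
        """Return {report_id: report_payload} for IDs not seen before."""
        listing = requests.get(f"{API_BASE}/reports", params={"name": REPORT_NAME})
        listing.raise_for_status()
        all_ids = {item["id"] for item in listing.json()}

        new_reports = {}
        for report_id in all_ids - known_ids:
            resp = requests.get(f"{API_BASE}/reports/{report_id}")
            resp.raise_for_status()
            new_reports[report_id] = resp.json()   # copy this payload to storage
        return new_reports

In ADF terms, the same pattern is usually a Lookup (or Web) activity that returns the list of IDs, followed by a ForEach that runs the Copy Data activity once per ID, all on a schedule trigger.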

How can I consume a SQL database to create / update records in Salesforce using Apex?

Is it possible for a SQL database to be consumed from Salesforce? What happens is that account records are loaded into the SQL database and a code is generated, so what I am looking for is for Salesforce to consult this database every month to verify that these accounts exist in Salesforce, and if not, to create or update those records.
In other words, from time to time Salesforce would consult the database to obtain the accounts and create them in Salesforce.
Is it possible to achieve this using Apex?
From Salesforce you cannot access a different database directly. You need to have a Web API on another application that can access the targeted database; from your Apex code you can then do a callout to invoke that Web API.
If it is a one-time exercise, it is better to just extract the data into CSV and export the data from Salesforce as well, doing any clean-up and transformation required. Then use Data Loader, the Data Import Wizard, or any other method to re-import that data into Salesforce.
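A minimal sketch of the callout route described above, assuming a hypothetical Web API endpoint that returns account rows as JSON (the endpoint, JSON shape, and field names are placeholders, and the endpoint must first be allowed via Remote Site Settings or a Named Credential):

    // Sketch: call an external Web API and upsert the returned rows as Accounts.
    public with sharing class AccountSyncService {
        public static void syncAccounts() {
            HttpRequest req = new HttpRequest();
            req.setEndpoint('https://example.com/api/accounts'); // hypothetical Web API
            req.setMethod('GET');

            Http http = new Http();
            HttpResponse res = http.send(req);
            if (res.getStatusCode() == 200) {
                // Assumes a JSON array of objects that each carry a Name field
                List<Object> rows = (List<Object>) JSON.deserializeUntyped(res.getBody());
                List<Account> accounts = new List<Account>();
                for (Object row : rows) {
                    Map<String, Object> fields = (Map<String, Object>) row;
                    accounts.add(new Account(Name = (String) fields.get('Name')));
                }
                upsert accounts; // ideally keyed on an external Id field holding the generated code
            }
        }
    }

To run it monthly, the method could be invoked from a scheduled job, with the callout made from a Queueable or @future(callout=true) context, since scheduled Apex cannot make callouts directly.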

Access Database - Create a table using data from two different sources

I am a new grad and my programming/database skills are very rudimentary. I was tasked with creating a database and ran into this one issue that I can't solve. I have to create a report that shows some testing results. There are two types of tests: custom tests and core tests. All custom tests have a core test attached to them, so I have two test IDs, a Core ID and a Custom ID, and the Custom ID can always be tracked back to a Core ID.
I can't find a way to consolidate both custom and core results in one place and use that as a record source for my report. I tried making a temp table to hold the custom and core results, but then I can't consolidate the data that overlaps when a custom result has a core ID attached to it as well. Should I look into using VBA? I've tried update, union, and append queries, etc., but I can't reach a solution.
How can I create a table that extracts data from the two different sources and removes the duplicates? I've used a union query (tried both UNION and UNION ALL) but it omits data that has both a core and a custom result. Some guidance would be greatly appreciated.
I've attached a picture, and this is where I get into trouble. The custom test is related to a core test, and the data I need to fetch is in the core table. I have a table that links the custom test to a specific core test, but how do I tell Access to go to the core table to fetch more details? I am having trouble expressing that logic.
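One way to express that logic is a single union query: join each custom result through the linking table to its core test to pick up the core details, then append the core-only results that have no custom test. The table and column names below are hypothetical placeholders, and Access's SQL view does not accept comments, so the query is shown bare:

    SELECT c.CustomID AS TestID, 'Custom' AS TestType,
           k.CoreID, k.CoreDetail, c.CustomResult AS Result
    FROM (CustomTests AS c
          INNER JOIN CustomCoreLink AS l ON c.CustomID = l.CustomID)
          INNER JOIN CoreTests AS k ON l.CoreID = k.CoreID
    UNION ALL
    SELECT k.CoreID AS TestID, 'Core' AS TestType,
           k.CoreID, k.CoreDetail, k.CoreResult AS Result
    FROM CoreTests AS k
    WHERE k.CoreID NOT IN (SELECT CoreID FROM CustomCoreLink);

Saving that as a query and using it as the report's record source avoids the temp table, and no VBA is required for this part.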

How do I create a BigQuery dataset out of another BigQuery dataset?

I need to understand the below:
1.) How does one BigQuery dataset connect to another BigQuery dataset, apply some logic, and create another dataset? For example, if I have an ETL tool like DataStage and some data has been uploaded for us to consume in the form of a BigQuery dataset, how do I design the job in DataStage (or any other technology) so that the source is one BQ dataset and the target is another BQ dataset?
2.) I want my input to be a BigQuery view, run some logic on that view, and then load the result into another BigQuery view.
3.) What is the technology used to connect one BigQuery dataset to another? Is it HTTPS or some other technology?
Thanks
If you have a large amount of data to process (many GB), you should do the transformation of the data directly in the BigQuery database. It would be very slow to extract all the data, run it through something locally, and send it back. You don't need any outside technology to make one view depend on another view, besides access to the relevant data.
The ideal job design will be an SQL query that BigQuery can process. If you are trying to link tables/views across different projects, then the source BQ table must be listed in its fully-specified form projectName.datasetName.tableName in the FROM clauses of the SQL query. Project names are globally unique in Google Cloud.
Permissions to access the data must be set up correctly. BQ provides fine-grained control over who can access what, and this is covered in the BQ documentation. You can also enable public access to all BQ users if that is appropriate.
Once you have that SQL query, you can create a new view by sending your SQL to Google BigQuery either through the command line (the bq tool), the web console, or an API.
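As a minimal sketch of that, the following standard SQL (with placeholder project, dataset, and table names) creates a view in one project on top of a table that lives in another project; it can be submitted from the web console, the bq CLI, or any client API:

    -- Sketch: a view in the target project built from a table in the source project
    CREATE OR REPLACE VIEW `target-project.reporting.orders_summary` AS
    SELECT customer_id, SUM(amount) AS total_amount
    FROM `source-project.sales.orders`
    GROUP BY customer_id;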
1) You can use the BigQuery Connector in DataStage to read from and write to BigQuery.
2) BigQuery uses namespaces in the format project.dataset.table to access tables across projects. This allows you to manipulate your data in GCP as if it were all in the same database.
To manipulate your data you can use DML or standard SQL.
To execute your queries you can use the GCP web console or client libraries such as Python or Java (see the sketch below).
3) BigQuery is a RESTful web service and uses HTTPS.
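For example, here is a short Python sketch using the google-cloud-bigquery client library that reads from a table in one project and writes the result into a table in another project (all project, dataset, and table names are placeholders):

    # Sketch: query a table in one project and write the result to a table in
    # another project using the BigQuery Python client library.
    from google.cloud import bigquery

    client = bigquery.Client()

    sql = """
        SELECT customer_id, SUM(amount) AS total_amount
        FROM `source-project.sales.orders`
        GROUP BY customer_id
    """

    job_config = bigquery.QueryJobConfig(
        destination="target-project.reporting.orders_summary",
        write_disposition="WRITE_TRUNCATE",  # overwrite the destination each run
    )

    client.query(sql, job_config=job_config).result()  # wait for the job to finish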