I know that using a data pipe with QVX, QlikView can request data from a connector:
But what I was wondering is whether we can send the data currently in a sheet object, such as a table object or a multibox, back to a connector. As can be seen from the image above, the data pipe can only stream from the custom connector to QlikView, not the other way round.
I cannot give you a 100% definitive answer, but I would say no for the following reason:
QVXs are used via a QlikView load script to obtain data from a custom data source, and as such can only be executed when the script runs on a reload. QVXs cannot be executed outside of a load script. Furthermore, a load script does not and cannot access document objects such as charts or dimension filters, so even if you could pass data back, you could not feed it from a document object.
I'm trying to use the HTTP connector to read a CSV of data from the BoE statistical database.
Take the SONIA rate for instance.
There is a download button for a CSV extract.
I've converted this to the following URL, which downloads a CSV via a web browser.
[https://www.bankofengland.co.uk/boeapps/database/_iadb-fromshowcolumns.asp?csv.x=yes&Datefrom=01/Dec/2021&Dateto=01/Dec/2021 &SeriesCodes=IUDSOIA&CSVF=TN&UsingCodes=Y][1]
Putting this in the Base URL, it connects and pulls the data.
I'm trying to split this out so that I can parameterise some of it.
Base
https://www.bankofengland.co.uk/boeapps/database
Relative
_iadb-fromshowcolumns.asp?csv.x=yes&Datefrom=01/Dec/2021&Dateto=01/Dec/2021 &SeriesCodes=IUDSOIA&CSVF=TN&UsingCodes=Y
It won't fetch the data; however, when it's all combined in the Base URL, it does.
I've tried adding a "/" at the start of the relative URL as well, and that hasn't worked either.
According to the documentation, ADF puts the "/" in for you: "[Base]/[Relative]".
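For reference, here's a rough way I sanity-check the combined URL outside ADF (a sketch using Python's requests library; the only point is to show the "[Base]/[Relative]" concatenation, it is not part of the pipeline itself):

```python
import requests

# Rough sanity check outside ADF: the documentation says the Base URL and
# Relative URL are combined as "[Base]/[Relative]".
base_url = "https://www.bankofengland.co.uk/boeapps/database"
relative_url = (
    "_iadb-fromshowcolumns.asp?csv.x=yes"
    "&Datefrom=01/Dec/2021&Dateto=01/Dec/2021"
    " &SeriesCodes=IUDSOIA&CSVF=TN&UsingCodes=Y"
)

combined_url = f"{base_url}/{relative_url}"
response = requests.get(combined_url)
print(response.status_code)
print(response.text[:200])  # first part of the CSV, if the request succeeded
```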
Does anyone know what I'm doing wrong?
Thanks,
Dan
[1]: https://www.bankofengland.co.uk/boeapps/database/_iadb-fromshowcolumns.asp?csv.x=yes&Datefrom=01/Dec/2021&Dateto=01/Dec/2021 &SeriesCodes=IUDSOIA&CSVF=TN&UsingCodes=Y
I don't see a way you could download that data directly as a CSV file. The data seems to be meant to be copied manually from the site, using their Save as option.
They have used a read-only block and hidden elements, so I doubt there would be any easy way or out-of-the-box method within the ADF Web activity to help with this.
You can just manually copy and paste into a CSV file.
I'm currently working on a project in Azure Data Factory, which involves collecting data from a Dataset, using this data to make API calls, and thereafter taking the output of the calls, and posting them to another dataset.
In this way I wish to end up with a dataset containing various different data, that the API call returns to me.
My current difficulty with this is that I do not know how to make the "Web" activity (which I use to make the API call) save its output to my dataset.
I have tried numerous different solutions found online, but none of them seem to work. I am not sure if the official documentation is outdated or if I'm misunderstanding parts of it. Below I've listed links to the solutions I've tried without success:
Copy data from a REST source
Copy data from an HTTP source
(among others, including similar posts to mine.)
The current flow in my pipeline is that a "Lookup" activity collects a list of variables named "User_ID". These user IDs are fed into a ForEach loop, which makes an API call with the "Web" activity for each of the User_IDs. This is the point in the pipeline where I wish to add an activity that can post each of these Web activity outputs into my new dataset.
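Conceptually, what I'm after is something like this rough Python sketch (the test API and IDs below are stand-ins, not my real source):

```python
import requests

# Stand-ins for the Lookup output and my real API (jsonplaceholder is just a public test API).
user_ids = [1, 2, 3]                                      # what the Lookup activity returns
api_base = "https://jsonplaceholder.typicode.com/users"   # placeholder endpoint

collected_outputs = []
for user_id in user_ids:                                  # the ForEach loop
    response = requests.get(f"{api_base}/{user_id}")      # the Web activity call
    collected_outputs.append(response.json())             # the output I want in my dataset

# In ADF terms, I'm looking for the activity that writes collected_outputs
# to my sink dataset.
```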
I've tried to use the "Copy data" activity, but all it seems to do is copy data straight from one dataset to another, without letting me manipulate the data (which I wish to do with my API call).
Does anyone have a solution to how this is done?
Thanks a lot in advance.
I'm not sure why you could not achieve this by following Copy data from a REST endpoint. I tested the steps below and they work fine, using the schema mapping feature of the 'Copy data' activity.
For example, I used a sample API, http://dummy.restapiexample.com/api/v1/employees, as the source and, for my testing, Cosmos DB as the sink. Of course, you can choose any other dataset as per your requirements.
1. Create a 'Linked Service' for the REST API. For simplicity, I do not have authentication for this API. Of course, you have that option if required.
2. Create a 'Linked Service' for the target data store. In my case, it is Cosmos DB.
3. Create a Dataset for the REST API and link it to the linked service created in #1.
4. Create a Dataset for the data store (in my case Cosmos DB) and link it to the linked service created in #2.
5. In the pipeline, add a 'Copy data' activity like below, with the source set to the REST dataset created in #3 and the sink set to the dataset created in #4. Also, in my case I had to add schema mapping to select the employees array from the API output and map each field to my data store.
And voila, that's it. When I run the pipeline, it calls the REST API and saves the output in my DB with my desired mapping.
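If it helps to see the shape of the transformation, here is a rough Python equivalent of what the Copy activity does with that source (the "data" key and field names are assumptions about this particular sample API, used only for illustration):

```python
import requests

# Call the same sample REST endpoint used as the source above.
response = requests.get("http://dummy.restapiexample.com/api/v1/employees")
payload = response.json()

# The schema mapping step, conceptually: pick the array of employee records
# out of the response (the "data" key is an assumption about this sample API)
# and map each record's fields to the columns of the sink dataset.
employees = payload.get("data", [])
rows_for_sink = [
    {
        "id": emp.get("id"),
        "name": emp.get("employee_name"),      # illustrative column mapping
        "salary": emp.get("employee_salary"),
    }
    for emp in employees
]

print(rows_for_sink[:2])  # what the Copy activity would write to the sink (Cosmos DB here)
```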
I want to know whether there is any command line client for doing data entry in DHIS2.
I found one, named dish (https://github.com/baosystems/dish2/), but it is only intended to simplify common tasks and is suited to batch metadata operations and system maintenance operations.
I want to enter data into data elements directly; is that possible? If not, is there any alternative method?
As far as I know, there are no command line clients to do data entry for DHIS2. There are, however, options to import data into DHIS2 using XML, JSON or CSV formats. So one option is to create the data in one of these formats first, then use the API to import it.
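For example, here is a minimal sketch of importing a data value set through the Web API with Python; the server URL, credentials and UIDs are placeholders you would replace with your own:

```python
import requests

# Placeholder server and credentials -- replace with your own DHIS2 instance.
BASE_URL = "https://play.dhis2.org/demo"
AUTH = ("admin", "district")

# A data value set in JSON: values are reported for a data element within the
# context of a data set, a period and an organisation unit (UIDs below are placeholders).
payload = {
    "dataSet": "dataSetUid",
    "period": "202201",
    "orgUnit": "orgUnitUid",
    "dataValues": [
        {"dataElement": "dataElementUid", "value": "42"},
    ],
}

response = requests.post(f"{BASE_URL}/api/dataValueSets", json=payload, auth=AUTH)
print(response.status_code)
print(response.text[:300])  # import summary returned by DHIS2
```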
When you say you want to enter data into data elements directly, I assume you are referring to actual data and not metadata.
There is no way to interact with the DHIS2 api to add data directly to a data element. The reason for this is that data elements are either connected to a data set or, if you are using the tracker models, a program stage. A single data element can be connected to multiple data sets or program stages, so adding data directly to a data element wouldn't make sense.
You can however do data entry for a data element, but you need to go through either a data set or program stage that uses the data element.
What is your use-case for needing a command line client for this? Maybe I know of another solution that would help you.
I wonder if Kettle (AKA Pentaho PDI) supports changing metadata at run time.
I've implemented a couple of custom plugins:
The first plugin sends data to the second plugin. The metadata of the rows it outputs can change when certain conditions occur. In practice, this means that processRow() starts with one set of metadata and then, after a while, changes it. Of course, the row sent to the output through putRow() is always kept in sync with the related metadata.
The second plugin receives data from the first plugin, calling getInputRowMeta() to understand the metadata of the received row. However, that metadata does not seem to be synchronized with the received row.
Given the results of this simple example, I wonder if the Kettle engine supports this kind of run-time behavior, i.e. whether getInputRowMeta() returns the correct metadata for the specific row that has been received.
Is anybody able to provide evidence that changing metadata is actually not possible? Otherwise, is there any safe way of getting the metadata of the specific row received in processRow()?
From page 616 of the book Pentaho Kettle Solutions:
The calculation of the output row metadata is something that needs to happen once and only once because the layout of all the output rows needs to be the same.
Can I insert new data or update existing data in Essbase using the Excel add-in or Smart View, like I update data in the Palo multidimensional database?
regards,
Sri.
Yes. This is what Lock & Send is used for. After you have drilled to an intersection that you would like to update/load/change data in, you enter it directly within Excel. Then perform a Lock operation using the add-in or Smart View. This tells Essbase that you would like to update data that is currently being shown on your spreadsheet. Then perform a Send operation. This will upload all of the data on your sheet back to the database, assuming that you have access to change that data (if you are a read-only user or don't have sufficient filter access, for example, then you can't change the data). Note that all of the data in the spreadsheet will be sent up, so it is useful to navigate to the smallest possible subset of data that you would like to change.
After sending the data, it will automatically be unlocked. Then just retrieve the sheet to verify that the data you uploaded did in fact upload. If you are trying to upload to members that are dynamic calc, for example, then it won't work. Also note that data is typically loaded such that every intersection point is a Level-0 member; if not, it is possible that a subsequent aggregation/calc in the database might erase the data you just uploaded.