How to show historical data in Wirecloud? - fiware-wirecloud

I would like to show through Wirecloud some sensors' historical data that I have stored in a Cosmos instance from Orion. The data is stored in an HDFS table.
Looking through the options, I chose the History Module to Linear Graph operator and the Linear Graph widget. Please let me know if there are others better suited for this.
The History Module to Linear Graph operator requests a HistoryMod Server URL; should I provide the Cosmos URL? I get an error when trying to open the widget documentation, so I don't know how to proceed.
I saw that in the Santander example something similar was accomplished through CKAN, using the CKAN Resource Selector and Data Viewer Table widgets. Is this the only way to show historical data in Wirecloud (using CKAN)?

Related

Is there a way to access raw data stored in YouTrack?

In YouTrack reports, you can view issues by two fields, using the creation date as the y-axis and any other field as the x-axis. But when you do that, as in this graph, you see the number of issues that are currently in the state shown on the x-axis. For example, if the x-axis is the state, you will see the current states of the issues created in the date intervals of the y-axis. But I also want to see the number of issues in each state chronologically: I want to see the states (or some other field) of the issues on May 21, 2021 (not their current states, but their states on May 21).
I know that YouTrack keeps the state changes, their dates, and other data like that, because in various reports I can see that YouTrack uses past data, but there is usually no way to download the data behind those reports.
I want to access all of that raw data. My plan is to create some reports that are not available in YouTrack Reports, using R or Python. Is there a way to access that raw data, or a guide on how to do so?
The way to access raw data in YouTrack is through the REST API. For example, you can get an issue's activity data to retrieve the history of changes applied to it. This way you can identify how things have changed chronologically.
"I can see that YouTrack uses past data but usually there is no way to download the data of those reports."
Report data can be accessed via the API as well. The reports API endpoint is api/reports; however, it is not documented, as it may be subject to change, so backward compatibility can't be guaranteed. If you are fine with that, you can still use it. To see the exact request, check the network requests in your browser when loading a report.
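For example, a minimal Python sketch of pulling an issue's change history through the activities endpoint could look like the one below. The instance URL, issue ID, and permanent token are placeholders, and the exact field list you request will depend on what you want to reconstruct.

    import requests

    # Placeholders -- replace with your own YouTrack instance, issue, and permanent token.
    BASE_URL = "https://example.myjetbrains.com/youtrack"
    ISSUE_ID = "PROJ-123"
    TOKEN = "perm:your-permanent-token"

    # Request custom-field changes (e.g. State) with timestamps so the history
    # can be reconstructed chronologically and post-processed in R or Python.
    params = {
        "categories": "CustomFieldCategory",
        "fields": "timestamp,author(login),field(name),added(name),removed(name)",
    }
    resp = requests.get(
        f"{BASE_URL}/api/issues/{ISSUE_ID}/activities",
        headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
        params=params,
    )
    resp.raise_for_status()

    # Each activity item records which field changed and its old/new values.
    for item in resp.json():
        field = (item.get("field") or {}).get("name")
        print(item["timestamp"], field, item.get("removed"), "->", item.get("added"))

The same pattern applies to the undocumented api/reports endpoint: replay the request you see in the browser's network tab with the same Bearer token.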

How to save API output data to a Dataset in Azure Data Factory

I'm currently working on a project in Azure Data Factory, which involves collecting data from a Dataset, using this data to make API calls, and then taking the output of those calls and posting it to another dataset.
In this way I wish to end up with a dataset containing the various data that the API calls return to me.
My current difficulty is that I do not know how to make the "Web" activity (which I use to make the API call) save its output to my dataset.
I have tried numerous solutions found online, but none of them seem to work. I am not sure if the official documentation is outdated or if I'm misunderstanding parts of it. Below I've listed links to the solutions I've tried without success:
Copy data from a REST source
Copy data from an HTTP source
(among others, including similar posts to mine.)
The current flow in my pipeline is that a "Lookup" activity collects a list of variables named "User_ID". These user IDs are fed into a ForEach loop, which makes an API call with the "Web" activity using each of the user IDs. This is where in the pipeline I wish to add an activity (or something else) that can post each of these Web activity outputs into my new dataset.
I've tried to use the "Copy data" activity, but all it seems to do is copy data straight from one dataset to another, without letting me manipulate the data (which I wish to do with my API call).
Does anyone have a solution to how this is done?
Thanks a lot in advance.
Not sure why you could not achieve this by following Copy data from a REST endpoint. I tested the setup below and it works fine, using the schema mapping feature of the 'Copy data' activity.
For example, I used a sample API, http://dummy.restapiexample.com/api/v1/employees, as the source and, for my testing, Cosmos DB as the sink. Of course you can choose any other dataset as per your requirements.
1. Create a 'Linked Service' for the REST API. For simplicity I do not use authentication for this API; of course, you have that option if required.
2. Create a 'Linked Service' for the target data store. In my case, it is Cosmos DB.
3. Create a Dataset for the REST API and link it to the linked service created in #1.
4. Create a Dataset for the data store (in my case Cosmos DB) and link it to the linked service created in #2.
5. In the pipeline, add a 'Copy data' activity with the source set to the REST dataset created in #3 and the sink set to the dataset created in #4. In my case I also had to add a schema mapping to select the employees array from the API output and map each of its fields to my data store.
And voila, that's it. When I run the pipeline, it calls the REST API and saves the output in my DB with my desired mapping.
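If you want to sanity-check the mapping outside Data Factory, here is a minimal Python sketch of what the 'Copy data' activity does conceptually: fetch the REST source, pick out the record array, and map each record's fields onto the sink schema. The field names and the name of the wrapping array are assumptions based on the sample API above; adjust them to your own payload and mapping.

    import requests

    # Sample REST source from the steps above; the exact response shape is an assumption.
    resp = requests.get("http://dummy.restapiexample.com/api/v1/employees")
    resp.raise_for_status()
    payload = resp.json()

    # The records are assumed to sit in a wrapping array (the employees array mentioned
    # above); this is what the collection reference in the schema mapping points at.
    records = payload.get("data", payload)

    # Map source fields to sink columns, mirroring the column mapping of the Copy activity.
    mapped = [
        {
            "id": r.get("id"),
            "name": r.get("employee_name"),
            "salary": r.get("employee_salary"),
        }
        for r in records
    ]

    print(mapped[:2])  # in Data Factory these rows would be written to the sink dataset

In the question's scenario, this is roughly what a Copy activity placed inside the ForEach loop would do for each User_ID.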

How can I move data from BigQuery or DataPrep to Firestore?

I just cleaned up my firestore collection data using DataPrep and verified the data via BigQuery. I now want to move the data back to Firestore. Is there a way to do this?
I have used the manual method of exporting to JSON and then uploading it using code provided by AngularFirebase. But it is not automated, and this data needs to be cleaned up periodically.
I am looking for a process within the Google Cloud Console. Any help will be appreciated.
This is not a full answer, more like a partial one; I could not add a comment as I don't have 50 reputation yet. I am in a similar boat, though not exactly the same situation: I want to take a subset of BigQuery data and add it to Firestore. My thinking is to do the following:
1. Use the BigQuery API to query the data periodically, using BigQuery Jobs' Load (in your case) or Query (in my case).
2. Convert it to JSON in code.
3. Use a batch commit in Firestore's API to update the Firestore database.
This is my idea and I am not sure whether it will work, but I will let you know more once I am done with it, unless someone else has better insights to help me and the person asking this question. A rough sketch of the approach is below.
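As a very rough sketch of that idea, assuming the google-cloud-bigquery and google-cloud-firestore client libraries and placeholder project, table, and collection names, it could look like this (Firestore batches are limited to 500 writes, hence the chunking):

    from google.cloud import bigquery, firestore

    # Placeholders -- replace the table and collection names with your own.
    bq = bigquery.Client()
    db = firestore.Client()

    rows = bq.query(
        "SELECT id, name, value FROM `my_project.my_dataset.cleaned_data`"
    ).result()

    batch = db.batch()
    count = 0
    for row in rows:
        doc_ref = db.collection("cleaned_data").document(str(row["id"]))
        batch.set(doc_ref, dict(row.items()))
        count += 1
        if count % 500 == 0:  # Firestore batches max out at 500 writes
            batch.commit()
            batch = db.batch()
    if count % 500:
        batch.commit()

Scheduling this script (e.g. with Cloud Scheduler) would cover the periodic cleanup part, but that is untested on my side.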

Backfill Google Analytics in BigQuery

I'm looking for a workaround on the following issue. Hope someone can help.
I'm unable to backfill data in the ga_sessions_ table in BigQuery through product linking in GA, e.g. the partition ga_sessions_20180517 is missing.
This specific view has already been linked before, and the Google documentation says that the historical load is only done once per view (hence the issue): https://support.google.com/analytics/answer/3416092?hl=en
Is there any way to work around it?
Kind regards,
Martijn
You can use the Google Analytics Reporting API to get the data for that view. This method has a lot of restrictions (e.g. the data is sometimes sampled, and only 7 dimensions can be exported in one call), but at least you will be able to fetch your data in a partitioned manner.
Documentation: the Google Analytics Reporting API reference.
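For reference, a minimal Python sketch of a v4 Reporting API call for one day of data could look like the following. The service-account key file, view ID, dimensions, and metrics are placeholders; pick the date range that matches the missing ga_sessions_ partition.

    from googleapiclient.discovery import build
    from google.oauth2 import service_account

    # Placeholders -- replace with your own key file and view ID.
    creds = service_account.Credentials.from_service_account_file(
        "service-account.json",
        scopes=["https://www.googleapis.com/auth/analytics.readonly"],
    )
    analytics = build("analyticsreporting", "v4", credentials=creds)

    # One request per missing day keeps the export "partitioned" like ga_sessions_YYYYMMDD.
    report = analytics.reports().batchGet(body={
        "reportRequests": [{
            "viewId": "XXXXXX",
            "dateRanges": [{"startDate": "2018-05-17", "endDate": "2018-05-17"}],
            "dimensions": [{"name": "ga:date"}, {"name": "ga:sourceMedium"}],
            "metrics": [{"expression": "ga:sessions"}],
        }]
    }).execute()

    for row in report["reports"][0]["data"].get("rows", []):
        print(row["dimensions"], row["metrics"][0]["values"])

The rows can then be loaded into a date-suffixed BigQuery table, though note that this aggregated export will not match the hit-level schema of the native ga_sessions_ tables.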
If you need a lot of dimensions/metrics in hit level format, scitylana.com has a service that can provide this data historically.
If you have a clientId set in a custom dimension, the data quality is near perfect.
It also works without a clientId set.
You can get all the history available through the API.
You can get 100+ dimensions/metrics in one batch into BQ.

Deleting rows in datastore by time range

I have a CKAN datastore with a column named "recvTime" of type timestamp (i.e. using "timestamp" as type at datastore_create time, as shown in this link). Example value for this column is "2014-06-12T16:08:39.542000".
I have a large number of records in the datastore (thousands) and I would like to delete the rows before a given date in "recvTime". My first thought was to do it using the REST API with the datastore_delete operation and a range filter, but that is not possible, as described in the following Q&A.
Is there any other way of solving the issue, please?
Given that I have access to the host where the CKAN server is running, I wonder if this could be achieved by executing a regular SQL statement on the PostgreSQL engine where the datastore is persisted. However, I haven't found information about manipulating CKAN's underlying data model in the CKAN documentation, so I don't know if this is a good idea or if it is risky...
Any workaround or information pointer is highly welcome. Thanks!
You could definitely do this directly on the underlying database if you were willing to dig in there (the structure is pretty simple with tables named after the corresponding resource id). You could even turn this into an API of your own using an extension (though you'd want to be careful about permissions).
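As a minimal sketch of the direct-SQL route, assuming access to the DataStore database and using placeholder connection settings and resource id (back up the database first, since this bypasses CKAN entirely):

    import psycopg2

    # Placeholders -- replace with your DataStore connection settings and resource id.
    DATASTORE_DSN = "dbname=datastore_default user=ckan_default host=localhost"
    RESOURCE_ID = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"  # hypothetical resource id
    CUTOFF = "2014-06-12T00:00:00"

    conn = psycopg2.connect(DATASTORE_DSN)
    try:
        with conn, conn.cursor() as cur:
            # Each resource has its own table named after the resource id, so the
            # table name must be quoted as an identifier rather than passed as a value.
            cur.execute(
                'DELETE FROM "{}" WHERE "recvTime" < %s'.format(RESOURCE_ID),
                (CUTOFF,),
            )
            print("deleted rows:", cur.rowcount)
    finally:
        conn.close()

The same DELETE could of course be run from psql directly; the point is just that the cutoff date goes into a normal WHERE clause on "recvTime".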
You might also be interested in the new support (master only at the moment) for extending the DataStore API via a plugin in an extension; see https://github.com/ckan/ckan/pull/1725