Is there any command line client to do data entry in DHIS2? - dhis-2

I want to know whether there is any command-line client for doing data entry in DHIS2.
I found one named dish (https://github.com/baosystems/dish2/), but it only simplifies common tasks and is aimed at batch metadata operations and system maintenance operations.
I want to enter data into data elements directly. Is that possible? If not, is there an alternative way to do it?

As far as I know, there are no command line clients to do data entry for DHIS2. There are, however, options to import data into DHIS2 using the XML, JSON or CSV formats. So one option is to create the data in one of these formats first, then use the API to import it.
When you say you want to enter data into data elements directly, I assume you are referring to actual data and not metadata.
There is no way to interact with the DHIS2 API to add data directly to a data element. The reason for this is that data elements are either connected to a data set or, if you are using the tracker models, a program stage. A single data element can be connected to multiple data sets or program stages, so adding data directly to a data element wouldn't make sense.
You can however do data entry for a data element, but you need to go through either a data set or program stage that uses the data element.
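To make that concrete, here is a rough sketch of what a minimal command-line import could look like, written here in Swift purely as an example (curl or any scripting language would work just as well). The server URL, credentials and UIDs are placeholders for values from your own instance; the payload follows the dataValueSets resource of the DHIS2 Web API.

```swift
import Foundation

// Placeholder server, credentials and UIDs -- replace with values from your own instance.
let endpoint = URL(string: "https://dhis2.example.org/api/dataValueSets")!
let credentials = Data("admin:district".utf8).base64EncodedString()

// A data value set: values are always reported against a data set, period and
// organisation unit, and each value names the data element it belongs to.
let payload: [String: Any] = [
    "dataSet": "<dataSetUID>",
    "period": "202401",
    "orgUnit": "<orgUnitUID>",
    "dataValues": [
        ["dataElement": "<dataElementUID>", "value": "12"]
    ]
]

var request = URLRequest(url: endpoint)
request.httpMethod = "POST"
request.addValue("application/json", forHTTPHeaderField: "Content-Type")
request.addValue("Basic \(credentials)", forHTTPHeaderField: "Authorization")
request.httpBody = try! JSONSerialization.data(withJSONObject: payload)

// Post the payload and print the import summary DHIS2 sends back.
let done = DispatchSemaphore(value: 0)
URLSession.shared.dataTask(with: request) { data, _, _ in
    if let data = data, let body = String(data: data, encoding: .utf8) {
        print(body)
    }
    done.signal()
}.resume()
done.wait()
```

The point of the sketch is mainly the payload shape: data values are always submitted in the context of a data set, period and organisation unit, which is why the data set (or program stage) cannot be skipped.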
What is your use-case for needing a command line client for this? Maybe I know of another solution that would help you.

Related

How to save API output data to a Dataset in Azure Data Factory

I'm currently working on a project in Azure Data Factory which involves collecting data from a dataset, using this data to make API calls, and then taking the output of the calls and posting it to another dataset.
In this way I wish to end up with a dataset containing the various pieces of data that the API calls return to me.
My current difficulty is that I do not know how to make the "Web" activity (which I use to make the API call) save its output to my dataset.
I have tried numerous solutions found online, but none of them seem to work. I am not sure whether the official documentation is outdated or whether I'm misunderstanding parts of it. Below I've listed the solutions I've tried without success:
Copy data from a REST source
Copy data from an HTTP source
(among others, including similar posts to mine.)
The current flow in my pipeline is that a "Lookup" activity collects a list of variables named "User_ID". These user IDs are put into a ForEach loop, which makes an API call with the "Web" activity using each of the User_IDs. This is the point in the pipeline where I wish to add an activity (or something else) that can post each of these Web activity outputs into my new dataset.
I've tried to use the "Copy data" activity, but all it seems to do is copy data straight from one dataset to another, without letting me manipulate the data (which I wish to do with my API call).
Does anyone have a solution to how this is done?
Thanks a lot in advance.
Not sure why you could not achieve this by following Copy data from a REST endpoint. I tested the setup below and it works fine, using the schema mapping feature of the 'Copy data' activity.
For example, I used the sample API http://dummy.restapiexample.com/api/v1/employees as the source and, for my testing, Cosmos DB as the sink. Of course you can choose any other dataset as per your requirements.
1. Create a 'Linked Service' for the REST API. For simplicity I do not use authentication for this API; of course, you have that option if required.
2. Create a 'Linked Service' for the target data store. In my case, it is Cosmos DB.
3. Create a Dataset for the REST API and link it to the linked service created in step 1.
4. Create a Dataset for the data store (in my case Cosmos DB) and link it to the linked service created in step 2.
5. In the pipeline, add a 'Copy data' activity with the REST dataset created in step 3 as the source and the dataset created in step 4 as the sink. In my case I also had to add a schema mapping to select the employees array from the API output and map it to the fields in my data store.
And voila, that's it. When I run the pipeline, it calls the REST API and saves the output in my DB with my desired mapping.

How to add more data to be stored in jenkins rest api

To keep the question simple: I know that I can get some build information with https://jenkins_server/...///api/json|xml|python, and I get a lot of information for that build record.
However, I want to add more information to that build record. For example, the Docker image created, or the tickets and files changed since the last build (to create a release note), etc. How do I do that?
For now, I use a script to create a JSON file as an artifact and fetch that JSON file to get this information, but it seems redundant if I could add more data to the Jenkins build object directly.
The Jenkins remote access API is designed to provide access to generic Jenkins-internal information, like build numbers, timestamps, fingerprints etc.
If you want to add your own data there, then you must extend Jenkins accordingly, e.g., by designing a plugin that advertises your (custom) information items as standard Jenkins-"internal" data. If you want to do that, you may want to have a look at the way fingerprint information is handled (I found that quite instructive).
However, I'd recommend that you stick with your current approach, and keep generic Jenkins-internal information separated from Job-specific data. It is less effort and clearly separates your own data from Jenkins' data.
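To make that recommended approach concrete, here is a minimal sketch (written in Swift purely as an example language; the Jenkins URL, job name, build number and artifact file name are hypothetical, and authentication is omitted) that reads the standard build JSON and a job-produced artifact side by side:

```swift
import Foundation

// Hypothetical build URL -- adjust to your own Jenkins instance, job and build number.
let buildURL = URL(string: "https://jenkins.example.com/job/my-job/42")!

// Fetch a URL and parse it as JSON, synchronously; fine for a small helper script.
func fetchJSON(_ url: URL) -> Any? {
    var result: Any?
    let done = DispatchSemaphore(value: 0)
    URLSession.shared.dataTask(with: url) { data, _, _ in
        if let data = data {
            result = try? JSONSerialization.jsonObject(with: data)
        }
        done.signal()
    }.resume()
    done.wait()
    return result
}

// Generic, Jenkins-provided build information (numbers, timestamps, results, ...).
let buildInfo = fetchJSON(buildURL.appendingPathComponent("api/json"))

// Job-specific data kept in an archived artifact (e.g. a build-info.json the job writes itself).
let customInfo = fetchJSON(buildURL.appendingPathComponent("artifact/build-info.json"))

print(buildInfo ?? "no build info")
print(customInfo ?? "no custom info")
```

Keeping the two sources separate like this means the job can evolve its own metadata format without touching Jenkins internals.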

Send data from a sheet object to a connector

I know that, using a data pipe with QVX, QlikView can request data from a connector.
But what I was wondering is whether we can send the data currently in a sheet object, such as a table object or a multibox object, back to a connector, because the data pipe can only stream from the custom connector to QlikView and not the other way round.
I cannot give you a 100% definitive answer, but I would say no for the following reason:
QVXs are used via a QlikView load script to obtain data from a custom data source, and as such can only be executed when the script runs on a reload. QVXs cannot be executed outside of a load script, and furthermore, a load script does not and cannot access document objects such as charts or dimension filters. As a result, even if you could pass data back, you could not feed it from a document object.

Cocoa Core Data and plain text file

I'm writing a very simple application which reads data from a specified file and populates the fetched data into an NSTableView. The file is a plain text file where each line represents an object (I need to parse the file). I want to use Core Data, and my question is: what is the Cocoa way to do that?
My first idea is to parse the file and create an instance of an entity for each line. I'm not sure that is the best solution. And later I'll write the changes back out to the file (after a save? or automatically after a given amount of time?).
My configuration is Mountain Lion and the latest Xcode.
A single entity with a number of attributes sounds good. If you had an attribute which held a reasonable amount of data and was repeated across a number of 'rows', then it would be a candidate for another entity.
Yes, each instance of your entity would represent one row in your table.
Personally, I would either save on request by the user, or not have a save button and save each change. Your issue is the conversion between your in-memory store and on-disk store. Using a plain text file doesn't help (though, depending on the file format, it could be possible to edit individual lines in the file using NSFileHandle).
Generally it would still make more sense to use a better format like XML or JSON and then make use of a framework like RestKit which will do all of the parsing and mapping for you (after you specify the mapping configuration).
You can also use Cocoa bindings to connect your data to your NSTableView.
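As a minimal sketch of that approach (written in Swift for brevity, though the question predates it), assuming a hypothetical model with a Record entity that has a string attribute "line" and an integer attribute "order", and an NSManagedObjectContext that is already set up:

```swift
import CoreData
import Foundation

// Import: create one managed object per non-empty line of the text file.
func importFile(at url: URL, into context: NSManagedObjectContext) throws {
    let contents = try String(contentsOf: url, encoding: .utf8)
    for (index, line) in contents.components(separatedBy: .newlines).enumerated()
        where !line.isEmpty {
        let record = NSEntityDescription.insertNewObject(forEntityName: "Record", into: context)
        record.setValue(line, forKey: "line")
        record.setValue(index, forKey: "order")
    }
    try context.save()
}

// Export: write the (possibly edited) records back out as plain text,
// e.g. when the user saves, matching the "save on request" suggestion above.
func exportFile(to url: URL, from context: NSManagedObjectContext) throws {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Record")
    request.sortDescriptors = [NSSortDescriptor(key: "order", ascending: true)]
    let records = try context.fetch(request)
    let text = records.compactMap { $0.value(forKey: "line") as? String }
                      .joined(separator: "\n")
    try text.write(to: url, atomically: true, encoding: .utf8)
}
```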

Need suggestions on which option will be efficient to store data on iPad

This is my first time working on a big project for a client, so I was not sure how to solve this problem. I have come up with two different ideas, but I need a professional opinion about which one is better :)
Situation :
There is an application which runs on different clients' iPads. Application data is stored in a giant XML file. This XML file is shared among all clients by a server, so the server has a centralised copy and each client has its own copy. Once a client makes changes to its XML copy, it updates the server copy, and the other clients update their copies from the updated server copy.
Only one client can make changes at a time. To enforce this, I have logic by which, before a client starts editing the XML, it has to get ownership from the server, and the server will only allow one client to edit at a time.
Now, on the client side, I have to come up with logic for updating my client copy and uploading it to the server. There are two options.
Option 1:
In option 1, I directly manipulate the XML file using the GDataXML parser and upload that copy to the server. For persistence I save the client copy on the iPad in the Documents directory.
Option 2:
In option 2, I read the XML file and create a Core Data representation for local storage. Whenever I update data inside Core Data, I also change the XML file and then upload that file to the server. Double the work, but I guess better persistence.
Now, which one is more robust and advisable? Personally I was planning to go with option 2 because it seems more robust, as I am persisting application data in Core Data. Option 1 seems like less work, but I don't know how good the persistence will be.
Sorry for the lengthy question.
Thanks for any input given.
There are a number of factors which would influence selecting the second option over the first.
How big is the XML file? If you need to work with very large documents, you may need to incrementally parse the XML (SAX-style) into Core Data; a sketch follows this answer. This will allow you to access the document's contents without loading it all into memory at once.
Do you need to run complex queries on the data? If so, you may be better off using Core Data fetch predicates rather than XPath or XSL.
Are you already using Core Data? Depending on how the XML data is structured, it might be simpler overall to import the data into your existing persistent store.
Otherwise, you can probably make do with parsing the entire document and either traversing the resulting tree or querying it with XPath.
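Here is a sketch of the incremental, SAX-style route mentioned above, using Foundation's XMLParser, which hands elements to a delegate one at a time so the whole document never has to sit in memory. The element name "item", its attributes and the matching "Item" entity are assumptions made purely for illustration:

```swift
import CoreData
import Foundation

// Streams <item name="..." value="..."/> elements straight into Core Data
// as they are encountered, instead of building a full in-memory tree first.
final class ItemImporter: NSObject, XMLParserDelegate {
    private let context: NSManagedObjectContext

    init(context: NSManagedObjectContext) {
        self.context = context
    }

    func parse(fileAt url: URL) -> Bool {
        guard let parser = XMLParser(contentsOf: url) else { return false }
        parser.delegate = self
        return parser.parse()
    }

    func parser(_ parser: XMLParser, didStartElement elementName: String,
                namespaceURI: String?, qualifiedName qName: String?,
                attributes attributeDict: [String: String]) {
        guard elementName == "item" else { return }
        let item = NSEntityDescription.insertNewObject(forEntityName: "Item", into: context)
        item.setValue(attributeDict["name"], forKey: "name")
        item.setValue(attributeDict["value"], forKey: "value")
    }

    func parserDidEndDocument(_ parser: XMLParser) {
        try? context.save()
    }
}
```

Once the data is in Core Data, the "complex queries" point above becomes an NSFetchRequest with an NSPredicate instead of an XPath expression.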
If you need to create an object graph based on what you get from the server and show it to the user (which you most probably need to do), you should stick with the second option, since it allows easy and robust data persistence.
If you do not need to present the user with any data from the XML file, you can, of course, just store it in the Documents directory.
So, if this is a client application and it has at least some visual representation of the data from the XML file, you should use Core Data.
If you want regular updates of the data, then use Core Data.