Morning,
Here is my problem,
I am trying to save the data captured from the Condition Monitoring system. The code we are using now only reads the parameters; I'd like to save files for the data that has crossed the threshold.
I now have the waveform from the data; how can I write it to files?
The files should be TDMS, and the data is acquired by an FPGA.
Thank you very much!
Kind regards
Jialin
You can use the TDMS Write function and set up your channels and groups there. You can also attach properties to your data using the TDMS Set Properties function.
Have a look at the examples. For example, this one seems like a good fit for you: TDMS Write Triggered Data VI: labview\examples\File IO\TDMS\Standard Read and Write
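The TDMS functions in LabVIEW are graphical, so there's no text code to paste here, but if it helps to see the file structure they produce (file -> groups -> channels, each with properties), below is a minimal sketch using the third-party Python library npTDMS. The group/channel names, sample rate, and property values are placeholders, not anything from your application:

    # Minimal sketch with npTDMS (pip install npTDMS); all names/values are placeholders.
    import numpy as np
    from nptdms import TdmsWriter, RootObject, GroupObject, ChannelObject

    sample_rate = 25600.0             # placeholder FPGA acquisition rate
    waveform = np.random.randn(1024)  # stands in for the captured waveform

    root = RootObject(properties={"description": "Threshold-crossing capture"})
    group = GroupObject("Vibration")
    channel = ChannelObject(
        "Vibration", "Ch0", waveform,
        properties={
            # Standard TDMS waveform properties LabVIEW uses to rebuild a waveform:
            "wf_start_time": np.datetime64("now"),
            "wf_increment": 1.0 / sample_rate,
        },
    )

    with TdmsWriter("capture.tdms") as writer:
        writer.write_segment([root, group, channel])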
I am trying to create an Age Analysis for Creditors using a dynamic date slicer.
I followed each step specified in David Churchward's blog post, but I'm not able to replicate what he suggested there.
Here is the result of what I tried:
I'm expecting to see these values each in their own Ageing bucket based on what is outstanding.
Please download my PBIX file to see for yourself, then advise what I did wrong.
The Excel source for PBIX is also in the folder.
Thank you.
The blog post you're referring to is quite old, and DAX has changed a lot since then.
Additionally, Power BI now has a built-in feature called binning, which can do something similar to what you're looking for.
I was able to generate the output below using that feature, which automatically groups the data based on the bin size.
There is also a related feature called "Grouping", where you can manually choose the groups and their ranges. If you're up for it, you can use this instead. Below is the output for that:
I uploaded the file with these changes in the same folder.
Another resource that might be helpful for you is Radacad's article on dynamic banding.
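Binning and grouping are configured in the Power BI UI rather than in code, but the underlying idea is just assigning each value to a bucket by range. If it helps to see that idea spelled out, here is a small sketch in Python/pandas; the column names and bucket edges are made up for illustration:

    import pandas as pd

    # Hypothetical invoice ageing data; column names are made up.
    invoices = pd.DataFrame({
        "Creditor": ["A", "B", "C", "D"],
        "DaysOutstanding": [12, 45, 75, 120],
    })

    # Fixed-width buckets, similar in spirit to Power BI's binning by bin size.
    invoices["Bucket"] = pd.cut(
        invoices["DaysOutstanding"],
        bins=[0, 30, 60, 90, float("inf")],
        labels=["0-30", "31-60", "61-90", "90+"],
    )

    # Count of outstanding invoices per ageing bucket.
    print(invoices.groupby("Bucket", observed=True)["Creditor"].count())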
I need to design a pipeline using NiFi, but I have some questions: I am torn between two approaches and unsure which processors to use, so maybe you can help me.
The scenario is the following: I need to ingest some .csv files into my HDFS. They do not contain the date I want to use to partition the Hive tables I will use later, so I thought of two options:
At some point during the .csv processing, have NiFi launch some kind of code snippet that modifies the .csv file, adding a column with the date.
Create a temporary (internal?) table in Hive, alter the table to add the column, and finally insert it into the table partitioned by date.
I am unsure which option is better (memory-wise, for simplicity, for resource management), whether it's even possible, or whether there is a better way to do it altogether. I am also unsure which NiFi processors to use.
So any help is appreciated, thanks.
You should be able to do #1 easily in NiFi without writing any code :)
The steps would be something like this:
Source processor to get your CSV from somewhere, probably GetFile
UpdateAttribute to add an attribute for the current date
UpdateRecord with a CsvReader and CsvWriter, which adds a new date field with the value from #2
I've created an example of how to do this and posted the template here:
https://gist.githubusercontent.com/bbende/113f8fa44250c09a5282d04ee600cd09/raw/c6fe8b1b9f31bb106f9c816e4fd5ea90ebe19f80/CsvAddDate.xml
Save that XML file and use the palette on the left of the NiFi canvas to upload it as a template. Then instantiate it by dragging the template icon from the top toolbar onto the canvas.
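For reference, this is roughly what the record update does, written out as the kind of standalone script option #1 had in mind (e.g. something you could run from ExecuteScript). The file names and the "load_date" column name are placeholders:

    import csv
    from datetime import date

    # Append a load_date column to every row of a CSV; roughly what the
    # UpdateRecord step does declaratively. Names are placeholders.
    today = date.today().isoformat()

    with open("input.csv", newline="") as src, open("output.csv", "w", newline="") as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        header = next(reader)
        writer.writerow(header + ["load_date"])
        for row in reader:
            writer.writerow(row + [today])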
I'm pretty new to Jedox, and I'm trying, for internal use, to export some specific "warning logs" (e.g. "(Mapping missing)") into an Excel/WSS file.
I don't know how to do that...
Can you help me, please?
Regards,
Usik
The easiest way to get this information is to use the Integrator in Jedox.
There you have the possibility to use a File Extract, and then you can filter for the information you are searching for.
After that, it's possible to load the filtered information into a file.
The minimum steps you'll need are Connection -> Extract -> Transform -> Load.
Please take a look at the sample projects that are delivered with the Jedox software. The "sampleBiker" example also contains file connections, extracts, etc.
You can find more samples in:
<Install_path>\tomcat\webapps\etlserver\data\samples
I also recommend checking the Jedox Knowledgebase.
The other (and maybe more flexible) way would be to use, for example, a PHP macro inside a Jedox Web report and read the log file you're trying to display.
If you've got a more specific idea of what you'd like to do, please let me know and I'll try to give you an example of how to do it.
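Whichever route you take, the core of it is just filtering the log for the warning text and writing the matches out. Here is a rough illustration of that logic in Python (a Jedox Web macro would do the same in PHP); the file names and the warning string are examples, not Jedox defaults:

    # Collect log lines containing the warning text so they can be exported.
    matches = []
    with open("integrator.log", encoding="utf-8") as log:
        for line in log:
            if "(Mapping missing)" in line:
                matches.append(line.rstrip("\n"))

    with open("warnings.csv", "w", encoding="utf-8") as out:
        out.write("\n".join(matches) + "\n")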
I've studied BPMN in coursework; this is my first time applying it to real-world scenarios that don't follow any of my textbook examples.
I am trying to illustrate a process where a client can either upload a CSV file, manually enter records, or both. At the end of the day, all records are loaded to a production database via a script. At the moment, I've got it like this:
But, unless one reads the notes attached to each object, the diagram says that uploaded AND manual data will both be present.
In BPMN, how would I designate that Path "A", Path "B", OR both could be valid? How do I label the gateway? I anticipate putting the scripting step between the data input and the production database, but again, I'm not quite sure how to specify that the script runs ONCE based on the presence of data from EITHER feed, not both.
What would this typically look like? Thanks in advance.
In BPMN, to express that Path A, Path B, or both could be valid ways forward, you can use an "inclusive or" gateway. I would typically label the split with a question and the outgoing paths with the "answers", in other words the conditions under which each path is activated. If I understand your example correctly, a possible solution could look like the following.
Whether you want to use the task types I used depends a bit on your specific context. The task types in my example mean that for the "upload" the process is "waiting for an incoming message", while in the case of manual entry it is "waiting for a user to complete the task" (by entering the required data).
The example also assumes that you know, before you reach the inclusive-or gateway, which channels you will want to use this time.