I'm pretty new to Jedox, and for internal use I'm trying to export some specific warning logs (e.g. "(Mapping missing)") into an Excel/WSS file.
I don't know how to do that...
Can you help me, please?
Regards,
Usik
The easiest way to get this information is to use the Integrator in Jedox.
There you can use a File extract and filter for the information you are searching for.
After that you can load the filtered information into a file.
The minimum steps you'll need are Connection -> Extract -> Transform -> Load.
Please take a look at the sample projects that are delivered with the Jedox software. In the example "sampleBiker", there are also file connections, extracts etc.
You can find more samples in:
<Install_path>\tomcat\webapps\etlserver\data\samples
I recommend checking the Jedox Knowledgebase.
The other (and maybe more flexible) way would be to use, for example, a PHP macro inside a Jedox Web report to read the log file you're trying to display.
If you've got a more specific idea of what you'd like to do, please let me know and I'll try to give you an example of how to do it.
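As a starting point, here is a minimal sketch in plain PHP that filters the warnings out of a log file and writes them to a CSV. Both paths are assumptions you'd need to adjust to your installation, and in a real Jedox Web report you'd write the matches into cells instead:

    <?php
    // Filter "(Mapping missing)" warnings out of an ETL log file.
    // Both paths are hypothetical - adjust them to your setup.
    $logFile = '/path/to/tomcat/logs/etlserver.log';
    $out = fopen('/path/to/mapping_warnings.csv', 'w');
    foreach (file($logFile) as $line) {
        if (strpos($line, '(Mapping missing)') !== false) {
            fputcsv($out, array(trim($line)));
        }
    }
    fclose($out);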
I am trying to generate a server with openapi-generator, and I want to select the supporting files it creates. I understand there is the option -DsupportingFiles=StringUtil.java. This example comes from this page.
My question is quite simple: how can I choose the list of files to generate? I managed to generate one file at a time. The docs talk about a CSV format, and I have tried all of the options below, but nothing seems to work.
-DsupportingFiles=encoder.py,util.py
-DsupportingFiles="encoder.py","util.py"
-DsupportingFiles=encoder.py;util.py
Does anyone know if this is possible, or even whether this is the right way to do it?
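For reference, this is the shape of the full command I'm running (the spec, generator and output directory are placeholders here):

    java -jar openapi-generator-cli.jar generate \
      -i api.yaml \
      -g python \
      -o out \
      -DsupportingFiles=encoder.py,util.py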
I want to capture the errors generated by the system in Pentaho Kettle and expose them as results in a transformation or job. For example, I want to take the errors of the HL7 Input step from the log and expose them as results in the next step.
"I want to get errors generated by system"
You mean like Apache or MySQL errors? If that's the case, you may just point a Pentaho transformation at those files. They usually live in a default place like /var/log/apache2, and that would be pretty easy to read.
The part that's not that easy is parsing those errors into something easier to analyse. For that I would use a "Load file content in memory" step and some "Regex Evaluation" steps to get the data you want out of the raw text.
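As an illustration, this is the kind of pattern you would put into the Regex Evaluation step, assuming classic Apache-style error lines (adjust it to whatever your log actually looks like):

    # sample line:
    # [Wed Oct 11 14:32:52 2000] [error] [client 127.0.0.1] client denied by server configuration
    ^\[([^\]]+)\] \[([^\]]+)\] (?:\[client ([^\]]+)\] )?(.*)$

The four capture groups map to timestamp, severity, client (optional) and message fields.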
That said, there are better tools for reading your logs and analysing errors.
See Logstash or similar products for more info.
You could save those results in a temporary CSV file that the next step(s) can consume.
If you go with this solution I would recommend:
Adding a unique jobID or identifier in the file name to ensure that your next step is reading the right file.
Adding a step at the end that removes old temp files.
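Outside of Kettle itself, the pattern looks roughly like this (a Python sketch; the paths and the one-day retention are assumptions):

    import glob
    import os
    import time
    import uuid

    # Stand-in for whatever unique identifier your job provides
    job_id = uuid.uuid4().hex
    out_path = '/tmp/errors_%s.csv' % job_id  # hypothetical temp location

    # ... write the error rows to out_path; the next step reads the same path ...

    # Cleanup: remove temp files older than one day
    cutoff = time.time() - 24 * 3600
    for path in glob.glob('/tmp/errors_*.csv'):
        if os.path.getmtime(path) < cutoff:
            os.remove(path)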
As part of our work, we need to index our client's configuration files into Splunk and prepare reports on them. We need reports in Splunk similar to their existing reporting framework: users must be able to view specific configuration files, and they might compare two different files, perform a diff, etc.
I would like to know if there is a way to view the whole file in a pop-up window in Splunk search. If it is not already available, could you please tell me how to achieve it?
Depending on the version you use, there is a down arrow by each log entry. Click that and then click "Show Source"; you can view the source that way.
Or you can change the query to source="path/to/source.log" to see all the log entries from that source file.
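For example, a search along these lines (the index and path are placeholders) pulls back every event from one file in raw form:

    index=main source="/etc/myapp/app.conf" | sort _time | table _raw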
I have 5000 files in a folder, and new files keep getting loaded into the same folder on a daily basis. Each day I need to pick up only the latest file among all of them.
Is it possible to achieve this scenario in Mule out of the box?
I tried keeping a File component inside a Poll component (to make use of watermarks), but it is not working.
Is there any way we can achieve this? If not, please suggest the best approach (any links welcome).
Mule Studio 5.3, Runtime 3.7.2.
Thanks in advance
Short answer: there is no really quick out-of-the-box solution, but there are other ways. I'm not saying this is the right or only way of solving it, but I've implemented a similar scenario like this:
A normal File inbound endpoint with a database table as a file log. Each time a new file is picked up, a component checks whether its name already appears in the table. Via a choice router or filter I only continue if it isn't in there already, and after processing I add the filename to the table.
This is quite a "heavy" solution though. A simpler approach would be to use an idempotent filter with an object store. For example a Redis server: https://github.com/mulesoft/redis-connector/blob/master/src/test/resources/redis-objectstore-tests-config.xml
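A rough sketch of the idempotent-filter variant in Mule 3 XML (the path, names and store settings are assumptions, and the file namespace is assumed to be declared):

    <flow name="process-new-files">
        <!-- poll the folder; the original filename becomes the idempotent key -->
        <file:inbound-endpoint path="/data/incoming" pollingFrequency="10000"/>
        <!-- drop any file whose name has been seen before -->
        <idempotent-message-filter idExpression="#[message.inboundProperties.originalFilename]">
            <!-- in-memory store for brevity; use a persistent store (or the
                 Redis connector above) if the log must survive restarts -->
            <in-memory-store name="processed-files" maxEntries="10000"
                             entryTTL="-1" expirationInterval="3600000"/>
        </idempotent-message-filter>
        <logger level="INFO" message="Processing #[message.inboundProperties.originalFilename]"/>
    </flow>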
It is actually very simple if your incoming file name contains a timestamp... you can configure the file inbound connector by setting <file:filename-regex-filter pattern="myfilename_#[function:timestamp].csv"/>. I hope this helps.
Maybe you can use a Quartz scheduler (specify the time in a cron expression), followed by a Groovy script in which you start the file connector. Keep the file connector in another flow.
We recently managed to solve a data-transfer problem by finding out there is an additional .xsl file we could use. Since .xsl files seem to be the main way of controlling information flow in dcm4chee (besides the JMX configuration, of course), I'm wondering whether there is some kind of list or index that enumerates all the .xsl files one could use and their places in the workflow.
I mean, it would be nice to know exactly at which points we can influence the process.
I tried to Google for something like that, but no success so far. :/
Any help will be appreciated.
An online list of demo .xsl files is available at
https://svn.code.sf.net/p/dcm4che/svn/dcm4chee/dcm4chee-arc/trunk/dcm4jboss-hl7/src/etc/conf/dcm4chee-hl7
Which one is used is configurable through the JMX console, and the configuration is then written back into the configuration files.
So searching for .xsl, both as a file extension and as a full-text search pattern, in your dcm4chee installation directory will point you to exactly the places where the workflow can be influenced.
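For example (the installation path is a placeholder):

    # list every stylesheet shipped with the installation
    find /opt/dcm4chee -name '*.xsl'
    # find the configuration entries that reference stylesheets
    grep -R 'xsl' /opt/dcm4chee/server/default/conf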
It is a Java-style, Linux-style open-source project, so don't expect it to be too user-friendly or Google-friendly. I'm not aware of any easy-to-consume overview table/list/index, but it would be nice to have one. It is open source, so maybe you can add one...