Exporting ERwin models to XML or .erwin files programmatically - api

I have a requirement to programmatically export models in ERwin data modeler. The exported files could be saved in a directory (on server or local machine). We also want the process to export only the models that were changed after previous export.
Anybody know how to do that?
Thanks in advance,
Vivek

The ERwin API can be used to programmatically access the model.
You can write a program that steps through the model, then extracts and formats the information you want to export.
One of the model properties is the date the model was last updated. If your export program saves the date it was last run, you can compare the two.
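A minimal sketch of that comparison logic in Python (the state-file name and the way you obtain the model's last-updated date are assumptions; the actual date would come from the ERwin API, which is not shown here):

```python
import json
import os
from datetime import datetime

STATE_FILE = "last_export.json"  # hypothetical file remembering the previous export time

def load_last_export_time():
    """Return the timestamp of the previous export run, or None on first run."""
    if not os.path.exists(STATE_FILE):
        return None
    with open(STATE_FILE) as f:
        return datetime.fromisoformat(json.load(f)["last_export"])

def save_last_export_time(when):
    """Persist the time of this export run for the next comparison."""
    with open(STATE_FILE, "w") as f:
        json.dump({"last_export": when.isoformat()}, f)

def needs_export(model_last_updated, last_export):
    """Export only if the model changed after the previous export (or never exported)."""
    return last_export is None or model_last_updated > last_export
```

Your export program would call `needs_export` once per model, using the last-updated property read through the API, and only write out the models for which it returns True.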

Related

Best approach for this data pipeline?

I need to design a pipeline using Nifi, but I have some questions as I am thinking between two approaches and I am unsure which processors to use, so maybe you can help me.
The scenario is the following: I need to ingest some .csv files into my HDFS. They do not contain the date I want to use to partition the Hive tables I will later use, so I thought of two options:
1. At some point during the .csv processing, launch some kind of code snippet from NiFi that modifies the .csv file, adding the column with the date.
2. Create a temporary (internal?) table in Hive, alter the table to add the column, and finally insert into the table that is partitioned by date.
I am unsure which option is better (memory-wise, simplicity, resource management), whether it's even possible, or whether there is a better way to do it altogether. I am also unsure which NiFi processors to use.
Any help is appreciated, thanks.
You should be able to do #1 easily in NiFi without writing any code :)
The steps would be something like this:
1. A source processor to get your CSV from somewhere, probably GetFile
2. UpdateAttribute to add an attribute for the current date
3. UpdateRecord with a CsvReader and CsvWriter, which adds a new date field with the value from #2
I've created an example of how to do this and posted the template here:
https://gist.githubusercontent.com/bbende/113f8fa44250c09a5282d04ee600cd09/raw/c6fe8b1b9f31bb106f9c816e4fd5ea90ebe19f80/CsvAddDate.xml
Save that XML file and use the palette on the left of the NiFi canvas to upload it as a template. Then instantiate the template by dragging the template icon from the top toolbar onto the canvas.
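If you'd rather do the date-column step in a script (say, from an ExecuteScript processor, or outside NiFi entirely), the same transformation is a few lines of Python with the standard csv module; the column name here is just an example:

```python
import csv
import io
from datetime import date

def add_date_column(csv_text, column_name="load_date", value=None):
    """Append a date column with a fixed value to every record of a CSV string."""
    value = value or date.today().isoformat()
    reader = csv.DictReader(io.StringIO(csv_text))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames + [column_name])
    writer.writeheader()
    for row in reader:
        row[column_name] = value  # same value for the whole file, like the NiFi attribute
        writer.writerow(row)
    return out.getvalue()
```

This mirrors what UpdateRecord does: the header gains one field and every record gets the same date value, ready for the Hive partition column.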

Add existing .pdf to report?

Is it possible to add an existing PDF file to an ActiveReports 6 report?
We have two applications.
The first application creates a report and saves it as a PDF in a shared folder.
The second application creates its own report, and if the report from the first application exists, the user wants it appended to the second application's report.
The applications use different databases, so regenerating the first report is not a solution in this case.
Combining the PDFs is one workaround, which can be used if no other solution is found.
You can do what you're describing with the RDF file format (ActiveReports' built-in format), not with PDF. However, once the second report is generated and the two documents are combined, you can export them to PDF.
So far the answer to the question is no, it is not possible.
Searching the Internet, I found only that Data Dynamics reports (ActiveReports is/was a part of Data Dynamics) have a property to
"append existing .pdf to report"
But ActiveReports is a different product, which does not have this capability.
In our case we decided to use #Issam's workaround (save the .rdf file to another folder and use it later with another report).
I accepted my own answer only because the question was created to check whether this kind of capability exists for ActiveReports.

Failed to read netcdf file. Help needed

I have tried my best to read this file using several software packages (Idrisi, ArcMap, Envi) but failed. The only software that can read this data is Panoply, at http://www.giss.nasa.gov/tools/panoply/
To my surprise, Panoply recognised the data as HDF version 5 rather than netCDF. I can view my data but could not extract a specific 'layer' from it. I then need to open the data in either ArcMap or Idrisi Taiga.
Is anybody willing to help? The data can be accessed at https://docs.google.com/file/d/0BzzExM8ZYZwxdmI4bk5rSUw0VVE/edit?usp=sharing
It looks like the issue might be that the file is in netCDF-4 format (which is built on top of HDF5, hence Panoply's identification). In general, you cannot convert netCDF-4 into netCDF-3 unless some very specific constraints are met, as their data models are different (see http://www.unidata.ucar.edu/software/netcdf/docs/faq.html#fv14 for more info). Luckily, your file is pretty simple and can be put into the netCDF-3 format using the following command:
nccopy -k classic tos_Omon_modmean_rcp26_00.nc tos_Omon_modmean_rcp26_00-nc3.nc
The new file will be in the netCDF-3 classic format, which will likely work with the tools you are using. If you need me to, I can post the converted file for you to download (if you do not have netCDF installed, and thus access to nccopy, on your system).
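If you end up with many files to convert, the nccopy call is easy to script. A small Python wrapper might look like this (it assumes nccopy is on your PATH; the "-nc3" suffix is just a naming convention):

```python
import subprocess
from pathlib import Path

def build_nccopy_cmd(src, kind="classic"):
    """Build the nccopy command line converting src to netCDF-3 classic format."""
    src = Path(src)
    # e.g. foo.nc -> foo-nc3.nc, next to the original
    dst = src.with_name(src.stem + "-nc3" + src.suffix)
    return ["nccopy", "-k", kind, str(src), str(dst)]

def convert(src):
    """Run the conversion, raising if nccopy reports an error."""
    subprocess.run(build_nccopy_cmd(src), check=True)
```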
Cheers!
Sean

How to massively load attachment files to OpenERP

I have PDF files of previous employee payslips on disk. I would like to load them into OpenERP as attachments on each employee. Furthermore, I would like to attach more files every month.
To achieve this I want to write a specific module/addon, or a Python program accessing OpenERP through XML-RPC.
How should I interact with the employee model to programmatically attach a file? Is there an ORM method that can be used for this?
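For the XML-RPC route, one approach is to create ir.attachment records linked to each employee. A sketch in Python (the server URL, database, credentials, and employee id are placeholders; the field names follow the ir.attachment model, where res_model/res_id point at the record the file is attached to):

```python
import base64
from xmlrpc import client  # xmlrpclib on the Python 2 installs OpenERP shipped with

def build_attachment_values(filename, pdf_bytes, employee_id):
    """Values for an ir.attachment record linked to an hr.employee record."""
    return {
        "name": filename,
        "datas_fname": filename,
        "datas": base64.b64encode(pdf_bytes).decode("ascii"),  # file content, base64
        "res_model": "hr.employee",  # attach to the employee model
        "res_id": employee_id,       # ... to this particular employee
    }

def attach_payslip(url, db, uid, password, filename, pdf_bytes, employee_id):
    """Create the attachment through the /xmlrpc/object endpoint."""
    proxy = client.ServerProxy("%s/xmlrpc/object" % url)
    return proxy.execute(db, uid, password, "ir.attachment", "create",
                         build_attachment_values(filename, pdf_bytes, employee_id))
```

A monthly batch job would loop over the new PDFs on disk, look up each employee's id (e.g. by searching hr.employee on the employee number encoded in the filename), and call attach_payslip once per file.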
If you want to do more advanced things with attachments, I suggest you check out the document module, as well as the document_ftp module. They store attachments in the file system, so you can just copy the files in instead of going through the API. We used the FTP server in OpenERP 5.0, and it worked well for attaching large numbers of files.
Be careful, though. When you install these modules, I think you lose all current attachments. You'll have to migrate those attachments to the file system somehow.
Use the OpenERP import/export feature: make a CSV file and then import it.

Can I let Core Data use an already created sql database

I have a mosques database that has 1000 items.
I want to use the Core Data approach to access my database.
I have already tried the SQLite approach to create the database:
I have a plain text file with all the data separated by tabs,
and I import the data from that .txt file into the SQLite file.
That works fine.
Now I want to know how I can import the data from my SQLite file into the newly created Core Data project.
Should I add the SQLite file to the resources?
Copy it or not?
I have looked at the CoreDataBooks example but I think I'm missing something.
I want to know the exact way to add an SQLite file to the resources of a Core Data project.
You can't.
You should regard the fact that Core Data uses SQLite as its save format as an implementation detail, not to be used directly unless you really, really, really need to. For example, you can't expect Core Data to work correctly if you also write directly to the SQLite file.
Instead, read the Core Data documentation, import the data directly from the tab-separated text file into the Core Data context, and let Core Data save it to the file. Yes, it does use SQLite behind the scenes, but it's better for you to forget that fact.
Yuji and Dave DeLong are right on both counts; however, I feel I should add that just because you can't realistically feed CoreData a pre-populated SQLite file doesn't mean you can't bootstrap your CoreData store from a SQLite file (or a text file, or anything else). It just means that you have to do the work yourself.
For instance, you could include your pre-populated SQLite file (with its own, non-CoreData schema, etc.) as a resource in the project. Then when your app starts up, if it sees that the CoreData store is empty, you can use the SQLite API directly to open/query your bootstrapping database and translate the results into operations that generate the desired object graph in CoreData. The next time the app starts up, the CoreData object graph will be populated, and you won't have to do it again.
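To make the "open/query your bootstrapping database" step concrete, here is a sketch of the read side. It is written in Python with the stdlib sqlite3 module for brevity (on iOS you would do the same thing through the C SQLite API or a wrapper, then hand each row to CoreData); the books table and its columns are hypothetical, loosely following the CoreDataBooks example:

```python
import sqlite3

def read_bootstrap_rows(db_path):
    """Read every row from the shipped (non-CoreData) SQLite file as plain dicts."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row  # rows addressable by column name
    try:
        # hypothetical schema: one row per book in the bundled database
        return [dict(row) for row in conn.execute("SELECT title, author FROM books")]
    finally:
        conn.close()
```

Each returned dict is then translated into one managed object insert in the CoreData context; after a single save, the bundled database is never needed again.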
The takeaway here is that while it's not "free," it's not "impossible." Many, many apps include built-in CoreData repositories that contain data. That data had to be bootstrapped from somewhere, right?