Failed to read netCDF file. Help needed - ArcGIS

I have tried my best to read this file using a few programs (Idrisi, ArcMap, ENVI) but failed. The only software that can read this data is Panoply, at http://www.giss.nasa.gov/tools/panoply/
To my surprise, Panoply recognised the data as HDF version 5 rather than netCDF. I can view the data but could not extract a specific 'layer' from it. I need to open the data in either ArcMap or Idrisi Taiga.
Is anybody willing to help? The data can be accessed at https://docs.google.com/file/d/0BzzExM8ZYZwxdmI4bk5rSUw0VVE/edit?usp=sharing

It looks like the issue might be that the file is in netCDF-4 format (which is built on top of HDF5 - thus Panoply's ID). In general, you cannot convert netCDF-4 into netCDF-3 unless some very specific constraints are met, as their data models are different (see http://www.unidata.ucar.edu/software/netcdf/docs/faq.html#fv14 for more info). Luckily, your file is pretty simple and can be put into the netCDF-3 format using the following command:
nccopy -k classic tos_Omon_modmean_rcp26_00.nc tos_Omon_modmean_rcp26_00-nc3.nc
The new file will be in the netCDF-3 classic format, which will likely work with the tools you are using. If you need me to, I can post the converted file for you to download (if you do not have netCDF installed, and thus access to nccopy, on your system).
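And if you end up wanting to pull a single 'layer' out programmatically, the netCDF4 Python module can read both the original netCDF-4/HDF5 file and the converted classic file. A rough sketch only - I'm guessing the variable is called tos from the file name, so check the printed list first:

# Rough sketch: list the variables and grab one time slice ("layer").
# The variable name "tos" is a guess based on the file name.
from netCDF4 import Dataset

ds = Dataset("tos_Omon_modmean_rcp26_00.nc")
print(ds.variables.keys())        # see what is actually in the file
tos = ds.variables["tos"]
layer = tos[0, :, :]              # first time step as a 2-D array
print(layer.shape)
ds.close()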
Cheers!
Sean

Using DSPSTMF to display an STMF in the browser, but it's all junk and it is downloading the file instead of displaying it. Also, any idea about the CONTTYPES file?

I am using the CGI DSPSTMF command to display a stream file (STMF) in a web browser. I am copying a spool file to a stream file using CPYSPLF with the *STMF option. Once it is copied, I pass the IFS location to the DSPSTMF command, but the file downloads automatically, and when I open the downloaded file I get all junk data. Any idea why?
Also, I noticed it uses the CONTTYPES file in CGILIB, and on my server that file is empty. What values should be in it, and what should I do to show correct data instead of junk? I tried different methods to copy the file to the IFS, such as CPYTOSTMF instead of CPYSPLF, but while the file looks correct on the IFS, the downloaded version does not.
What CCSID is the resulting stream file tagged with?
Use WRKLNK and option 8=Display attributes.
If 65535, that tells the system the data is binary and it won't try to translate the EBCDIC to ASCII.
The correct fix is to properly configure your IBM i so that the stream file is tagged with its correct CCSID.
Do a WRKSYSVAL QCCSID ... if your system is still set to 65535, that's the start of your problem. But this isn't programming related; you could try posting to Server Fault, but you might get better responses on the Midrange mailing list.
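For example (the path and CCSID here are placeholders - use your own stream file path and the EBCDIC CCSID your spool data is actually in), you can retag an existing stream file with:

CHGATR OBJ('/cgilib/report.txt') ATR(*CCSID) VALUE(37)

CHGATR only changes the tag, not the bytes, which is what you want here since the data really is EBCDIC; once the tag is right, the translation to ASCII can happen when the file is served.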

File sink in GNU Radio

I am using a USRP1 along with GNU Radio. I want to store received data in a file using a file sink. I would like an idea of the flow graph, what extension I can store the file with, and how to read the data back from the file. Thanks in advance.
Did you search online? This is a pretty common task that many have documented. For example, check out Dynamic file names in GNU Radio, which links back to a page of examples including writing I&Q data to a file.
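To sketch the core of it: a GNU Radio flow graph is just Python, and the file sink writes raw binary samples, so there is no special extension - .dat is a common choice. A minimal sketch (not your exact USRP setup; in your case the source would be a UHD/USRP source block, and "samples.dat" is an arbitrary name):

# Minimal sketch: write complex samples to a raw binary file, then read them back.
import numpy as np
from gnuradio import gr, blocks, analog

samp_rate = 1e6
tb = gr.top_block()
src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 10e3, 1.0)   # stand-in for your USRP source
head = blocks.head(gr.sizeof_gr_complex, 100000)                      # stop after 100k samples
sink = blocks.file_sink(gr.sizeof_gr_complex, "samples.dat")
tb.connect(src, head, sink)
tb.run()
sink.close()

# The file is just raw interleaved float32 I/Q pairs:
data = np.fromfile("samples.dat", dtype=np.complex64)
print(len(data), data[:5])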

Can't migrate custom Plone file types to Blobs

We have custom content types that were created as extensions of the AT types; two of them extend the ATFile type and one extends the ATImage type. We recently upgraded from Plone 4.2 to Plone 4.3.2 and just discovered we are not using blob storage at all. No wonder our Data.fs is HUGE. So, I have been trying to migrate these custom types.
I have followed all of the steps explained in this example and the product's notes from PyPI, these Plone instructions, and used the example from the PyPI page for archetypes.schemaextender (sorry, since I'm still a noob my reputation won't let me post more than 2 links).
In the end, I created an extender script that just extends the ATFile type, changing the FileField to a BlobField. It seems to be working for new items: I can add a new CustomFileType and it appears to be uploading the file to blob, and my new upload field is showing (I changed the description as a quick way to verify which one it was using).
However, I am having a problem migrating all existing content items to move the binary files over to blob. I tried the generic migrate() script, then I created my own migrator and walker as suggested in the above resources. It doesn't seem like it is doing anything, though. When printing results for each item it tries migrating, I do see this returned for each item:
DEBUG ATCT.migration Migrating /site/path/to/custom/file/filename.ext (CustomFile -> Blob)
When I navigate to the custom file type in the site, where it usually shows the link to the file, it is just empty. Going to edit, it treats it as if there is no file there. As a check, I disabled the extender, restarted, and reloaded the custom file; the file was there again. So it looks like the script I am running just isn't moving that file over to where it should be.
I feel like I am missing something simple, and it is right there, but I can't seem to find it. All of this is learn as I go and a bit over my head, so hopefully someone can easily set me straight.
If I need to provide any additional information leave a comment and I will try to provide what you need.
UPDATE
I used the Red Turtle objects as examples to migrate my custom types, as suggested by keul. I still was not able to get the file to migrate to blob within the type itself. So, I tried a different approach. I created a new custom type, "CustomBlob", that is a mimic of my CustomFile type, and extended only this new type to be blob-aware. Then I migrated the CustomFiles to CustomBlob, did a complete clear and rebuild, and packed the ZEO. The migration seemed to work for the most part: the blobstorage grew by an expected amount and the new types worked. However, the Data.fs didn't go down in size. I would have thought that the binary files that were stored in Data.fs would be removed during the migration. Am I understanding this incorrectly? How can I remove these files so the Data.fs size goes down appropriately?
Not sure if this is the best solution, but here is how I was able to get this to work.
I created a temporary content type parallel to each type (for CustomImage I made CustomImageBlob, and so on). I made the new types blob-aware only and migrated all items to their parallel type. Then I enabled the extender for the original types to make them blob-aware and migrated back. It is a little redundant and time consuming, but I just could not get the files to migrate to blob when migrating a type to itself.
Providing this as the best answer so far in case it helps someone else, or encourages someone to find a better solution. Thanks for the tip, keul; it definitely helped me get to this solution.
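In case it saves someone a step: the heart of each of these migrations is just a loop that re-saves the file field once a blob-aware schema is in place. A very rough, untested sketch of that idea, run from a bin/instance debug prompt - 'Plone', 'CustomFile' and 'file' are placeholders for your own site id, portal_type and field name, and you may need to set up a security manager first:

# Untested sketch only -- placeholders throughout.
from Products.CMFCore.utils import getToolByName
import transaction

portal = app.Plone                      # ``app`` is provided by ``bin/instance debug``
catalog = getToolByName(portal, 'portal_catalog')

for brain in catalog(portal_type='CustomFile'):
    obj = brain.getObject()
    field = obj.getField('file')
    if field is None:
        continue
    # Re-setting the value lets the (now blob-aware) field store the
    # data in blobstorage instead of Data.fs.
    field.set(obj, field.get(obj))
    obj.reindexObject()

transaction.commit()

After any migration, the old copies only disappear from Data.fs once the ZODB is packed.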

Any way to automate the process of opening a .mpp file and saving it as a .csv?

I need to find a way to automate the process when a user uploads a Microsoft Project file to a web application I have already created. The process basically needs to use Save As from Project to save the file as a .csv, so I can use it to import the data into an SQL database (this is needed for custom reporting we already have set up using SQL). I need to automate this process because I will be receiving tons of project files, and if the process is automated the users will be able to see results instantly.
Basically, is there any way to create or run an automated process that will save these project files as .csv files? Even if the csv files are not formatted correctly, I can find a way around that; I just need to get them into .csv files first.
Thank you.
Edit: the only way I could think of doing this is to follow the instructions listed below, but I would then need to automate a process to open the file and hit save for this to work... any other suggestions?
http://social.technet.microsoft.com/Forums/en-US/projectprofessional2010general/thread/eea4ca15-0a0b-4c07-9989-87536b961385/
Edit 2: also looking into ways of using Microsoft.Office.Interop.MSProject, but not having any luck.
Edit 3: now using MPXJ. The only issue I am having is the following, listed below, when converting their example to VB.
Private Shared Function ToEnumerable(ByVal javaCollection As Collection) As EnumerableCollection
    Return New EnumerableCollection(javaCollection)
End Function
The error is with EnumerableCollection: Visual Studio is not picking it up as a valid type. Is there anything I am doing wrong, or something I should substitute?
If you aren't wedded to using MS Project itself to extract data from the project files, you could consider using the MPXJ library. This would allow you to write a simple utility to open the MPP files you are given, extract the data items you are interested in, and write them directly to your database (or an intermediate CSV file, as required). MPXJ comes in Java and .Net flavours, so you can use your preferred language to do the work.
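For what it's worth, here is a rough sketch of that idea using the Python wrapper for MPXJ (the equivalent reader and Task classes exist in the Java and .NET flavours); the file names and the fields written out are just examples:

# Rough sketch, assuming the "mpxj" Python package (which runs MPXJ via JPype) is installed.
import csv
import jpype
import jpype.imports
import mpxj

jpype.startJVM()
from net.sf.mpxj.reader import UniversalProjectReader

project = UniversalProjectReader().read("example.mpp")
with open("tasks.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["ID", "Name", "Start", "Finish"])
    for task in project.getTasks():
        writer.writerow([task.getID(), task.getName(), task.getStart(), task.getFinish()])

jpype.shutdownJVM()

The same loop could just as easily write straight to your SQL database instead of an intermediate CSV file.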
Jon
p.s. Disclaimer: I maintain MPXJ

How to create a fixed blocked (FB) file for IBM mainframe/FTP in VBA

I've got VBA code that generates a text file with some pretty basic information included. I then upload that file via FTP.
I got a message from the server admin of the IBM mainframe today that my file was in variable blocking (VB) format and their job process uses a fixed blocking (FB) up to a max size of 256.
How is this done? During the file creation? 3rd party tool?
B
You can simply convert the VB file into FB on the mainframe before running the actual process. A VB-to-FB conversion is just a small JCL step.
You can use LOCSITE to set the record format on the host dataset (file).
You can find the full list of FTP subcommands in the user guide below:
IP User’s Guide and Commands SC31-8780-05
Sorry all, I have a feeling I didn't explain this correctly, because I now have an answer which is rather simple. These two commands seem to have set up the environment correctly for the file to be FB rather than VB:
ftp> quote site lr=94
ftp> quote site rec=fb
If I remember rightly, FB means records are stored in multiples of the block size; that is just how DASD stores files on disk, and fitting that multiple of the block size increases speed and throughput on the mainframe. If the data file does not fall on those block-size boundaries (this has nothing to do with the actual size of the data), the DASD system just accesses the file in blocks of 256 bytes, and a host of special fields get inserted into the data file to describe the blocking and so on; these are added when the file is transferred to the mainframe, and that data also gets transferred to magnetic tape backups.
There should be a script available on the mainframe to convert it using JCL (Job Control Language); ask the mainframe administrator to do it for you.
By the way, be aware of the character set used in your data file: the mainframe uses the EBCDIC character set. There are plenty of tools out there that can convert ASCII data into a format readable by the mainframe, just something to bear in mind. If the data gets converted, that could affect the file size. Thought it would be worth mentioning, as it is important!
There is also a Unix/Linux utility, dd, that can convert the data to a fixed record length, although I do not think it would be the right way to do it...
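For illustration only (the file names are placeholders, and 94 matches the lr=94 record length used in the commands above), something along these lines pads each line out to a fixed length and converts it to EBCDIC:

dd if=report.txt of=report.fb cbs=94 conv=block,ebcdic

conv=block pads each newline-terminated record with spaces to the cbs length; whether the result is usable depends on which EBCDIC code page the mainframe expects, so the FTP quote site approach above is usually the safer route.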
Here's a useful link that will help you understand this. Also, here on SO a similar user was asking about MVS/TSO data...