Create a GeoTIFF file from an undocumented TIFF image - GDAL

I have an undocumented TIFF image which I need to use with software that can only read GeoTIFF files. My simplest idea was to pretend the image is at 0°N, 0°W with a pixel size of 0.00000899928° (about 1 m) in both directions.
I have read the thread here, but I was unable to reproduce the answer.
Thanks for helping. I am a beginner in geodesy, GIS and the like.

You are attempting to georeference a raster, which is often a difficult task with multiple possible techniques. It's not possible to give a definitive answer given the information you have supplied. Also, never assume that lengths in degrees can be converted directly to lengths in metres (the Earth isn't flat).
Search around GIS.SE for ideas, e.g. using the [georeferencing] tag. There are tools available in QGIS to help manually georeference rasters against other geospatial data.
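For completeness, the naive idea from the question (stamping an arbitrary origin and pixel size onto the image) can be sketched with GDAL's Python bindings; gdal_translate with -a_srs and -a_ullr does the same from the command line. This is only a fake georeference using the question's numbers, not a real one, and the 0.00000899928° ≈ 1 m equivalence holds only near the equator:

```python
from osgeo import gdal, osr

# Sketch only: copy the plain TIFF to a GeoTIFF and stamp on an arbitrary
# top-left origin of 0N, 0W with the pixel size from the question.
# File names are placeholders; this does NOT genuinely georeference the raster.
src = gdal.Open("undocumented.tif")
dst = gdal.GetDriverByName("GTiff").CreateCopy("fake_geo.tif", src)

pixel = 0.00000899928  # degrees; roughly 1 m only near the equator
# Geotransform: (top-left x, pixel width, row rotation,
#                top-left y, column rotation, pixel height; negative = north-up)
dst.SetGeoTransform((0.0, pixel, 0.0, 0.0, 0.0, -pixel))

srs = osr.SpatialReference()
srs.ImportFromEPSG(4326)        # WGS 84 geographic coordinates
dst.SetProjection(srs.ExportToWkt())
dst.FlushCache()                # flush everything to disk
```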

Related

Change Ghostscript dithering method when converting PDF to 256-color BMP

I am trying to produce high-quality 8 bpp BMPs from a PDF file with Ghostscript. For that purpose, I use the bmp256 device.
So far, everything works well and is really fast, but Ghostscript uses halftoning to dither the image, leading to some ugly patterns when zooming in on the picture:
I've managed to reduce their size by playing with the -dDITHERPPI flag, but the result is still not satisfying: the patterns are too regular and too easily spotted, even at low zoom.
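For reference, the invocation I'm using is roughly the following (sketched here via Python's subprocess; the file names, resolution and DITHERPPI value are placeholders rather than my actual settings):

```python
import subprocess

# Roughly the setup described above: render a PDF with the bmp256 device and
# tune the halftone screen with -dDITHERPPI.
subprocess.run([
    "gs", "-dBATCH", "-dNOPAUSE",
    "-sDEVICE=bmp256",            # 8 bpp (256-colour) BMP output
    "-r300",                      # rendering resolution in dpi
    "-dDITHERPPI=150",            # shrink the halftone cells
    "-sOutputFile=page-%03d.bmp",
    "input.pdf",
], check=True)
```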
Instead of halftoning, I would like to use an error-diffusion algorithm such as Floyd–Steinberg. I found that this algorithm is implemented in other devices, but they are all printer-related devices, so I can't really use them.
Also, the conversion to 8 bpp BMP needs to be as fast as possible, and the output pictures are very large, so converting to 24 or 32 bpp BMP first and dithering later with another tool is out of the question.
I already downloaded the source to try to implement it myself, but the project is really big and complex and I don't know where to start.
Is there any way to use an error-diffusion algorithm with Ghostscript without having to implement it myself?
If not, is there a preferred way of extending Ghostscript? Any guidelines?

What tools are commonly used to visualize meteorological and climatological data?

I am interested in visualizing meteorological and climatological data.
Here we are talking about 2D/3D visualization for weather and climate elements:
Temperature
Pressure
Wind
We have used some tools previously, such as:
GrADS
Surfer (commercial software)
GIS Meteo (commercial software)
What other tools (preferably open source) would you suggest for that purpose nowadays?
I know you mentioned GrADS, but it was the tool I used most for developing weather products: a little more intuitive and resource-friendly than IDV when I was coding, and with a generally good pace of development. You mentioned open source... did you know there is an OpenGrADS (http://opengrads.org/)? Most friends involved in weather product development use a combination of GrADS/OpenGrADS for much of their work. But I agree it doesn't produce knock-your-socks-off graphics.
Another commonly used free program is GEMPAK, another Unidata product, which really seems to be becoming outdated (in my personal opinion).
And then if you want to talk high-end graphics, you're going to pay more. http://moe.met.fsu.edu/~hrw22/movies/WIND_Katrina_2005-08-28_00Z.gif is a great video of Katrina that was produced by someone I knew using Amira. According to Wikipedia, you're looking at:
"Cost: $4,000 USD + $800/year support (2009)... although now has much more ugly/complex pricing structure where each feature is priced separately (eg: Amira Mesh Option $360). I believe at NCMIR we pay ~$9000/year for five user-license." Ouch!
I don't have an open source tool, but if you can get access to a Level II data feed (Level II is minimally post-processed radar data), a meteorologist friend and I use GR2Analyst. I assume you know enough about weather data sources to figure out how to set this up.
If you're looking for an open source (and free) tool that can do 2D and 3D, which also includes access to a wide variety of datasets (obs, model output, remote sensing - radar level 2 and 3, satellite, and more!), then you might want to check out the Unidata Integrated Data Viewer (IDV):
http://www.unidata.ucar.edu/software/idv/
Source code available here:
https://github.com/Unidata/IDV
The interface is a bit complex, but we have some YouTube screencasts to help people get up and running:
http://www.youtube.com/user/unidatanews/videos
If you'd like to see a video for a specific thing, we are taking requests :-) (email support-idv#unidata.ucar.edu). We do yearly training workshops as well, and those materials are available online here:
http://www.unidata.ucar.edu/software/idv/docs/workshop/
Cheers!
Sean
Panoply is a multiplatform desktop option if data is available in formats such as NetCDF, HDF or GRIB.
I extracted the following text from its site, which describes some of its features:
Slice and plot geo-gridded latitude-longitude, latitude-vertical, longitude-vertical, or time-latitude arrays from larger multidimensional variables.
Slice and plot "generic" 2D arrays from larger multidimensional variables.
Slice 1D arrays from larger multidimensional variables and create line plots.
Combine two geo-gridded arrays in one plot by differencing, summing or averaging.
Plot lon-lat data on a global or regional map using any of over 100 map projections or make a zonal average line plot.
Overlay continent outlines or masks on lon-lat map plots.
Use any of numerous color tables for the scale colorbar, or apply your own custom ACT, CPT, or RGB color table.
Save plots to disk as GIF, JPEG, PNG or TIFF bitmap images, or as PDF or PostScript graphics files.
Export lon-lat map plots in KMZ format.
Export animations as AVI or MOV video or as a collection of individual frame images.
Explore remote THREDDS and OpenDAP catalogs and open datasets served from them.
If you are interested in interactive visualization over the web, there are some options such as:
ncWMS: a web mapping server that reads NetCDF data and publishes it using the Web Map Service (WMS) standard.
GeoServer: another web mapping server that has a plugin to read NetCDF data.
VTK (Visualization Toolkit) is an open source C++ library for 2D and 3D visualization that I use to visualize radar data in 3D.

JPEG 2000 file structure viewer

For an application we are developing it is important for us to know how much data (in bytes) is stored in a JPEG2000 code stream for each resolution and quality layer. Does anybody know an application / library that can easily reveal this information?
Take a look at the free JP2 Metadata Editor:
http://j2k-codec.com/mde.html
It doesn't show you data by resolution but at least it shows you the compressed codestream size for each tile. Maybe it will be helpful to you.
I have been using jpylyzer and/or pirl (jp2info) in those cases.
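If you need this information programmatically, jpylyzer can also be driven from a script; a rough sketch below (using Python's subprocess; the file name is a placeholder, and the exact element names in the XML report vary between jpylyzer versions):

```python
import subprocess
import xml.etree.ElementTree as ET

# Run jpylyzer on a JP2 file and parse the XML properties report it prints
# to stdout. The report includes codestream header (SIZ/COD) information.
report = subprocess.run(["jpylyzer", "image.jp2"],
                        capture_output=True, text=True, check=True).stdout
root = ET.fromstring(report)

# As a starting point, print every element whose tag mentions quality layers
# or resolution levels; element names depend on the jpylyzer version.
for elem in root.iter():
    if "layer" in elem.tag.lower() or "level" in elem.tag.lower():
        print(elem.tag, elem.text)
```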

Reducing the size of a PDF generated from software using proprietary fonts

I am trying to bring an Indian magazine online. This magazine is typeset in CorelDraw using proprietary Devanagari fonts (Shree-Lipi, http://www.modular-infotech.com/html/shreelipi.html). The vendor provides a USB dongle that has to be attached to the machine whenever you want to access the fonts, and this software has been in use for the past 10 years.
To put the magazine online, we've tried converting it to PDF (by printing). The resulting PDF is of the order of 30-50 MB, even though it does not contain a single image. I am guessing it converts the whole text into an image.
It would be really difficult for users to read this magazine given its size. When I convert it to .swf format (to add flipbook-style functionality), the size drops to 5-6 MB, but there are people who like to download the magazine and then read it. I have had no luck reducing the size of the PDF.
I have done a lot of research on the web. PostScript and PrimoPDF do not help much. The best I could get was a 30% reduction using the DocuCom PDF printer, but it is still 20 MB. I have tried playing with resolution, compression and quality, but the best I could get was 18 MB.
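For the record, the kind of Ghostscript recompression pass that corresponds to "playing with resolution, compression and quality" looks roughly like this (a sketch via Python's subprocess; the file names are placeholders, and the gain is limited when a PDF contains no raster images):

```python
import subprocess

# Re-write the PDF through Ghostscript's pdfwrite device with the /ebook
# preset, which downsamples embedded images to roughly 150 dpi. This mainly
# helps image-heavy PDFs, so an all-vector PDF benefits little.
subprocess.run([
    "gs", "-dBATCH", "-dNOPAUSE",
    "-sDEVICE=pdfwrite",
    "-dPDFSETTINGS=/ebook",
    "-sOutputFile=jin-march-small.pdf",
    "jin-march.pdf",
], check=True)
```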
Ideally I would like to reduce it to less than 2MB.
I would be really grateful if you could help me reduce the size of the pdf! Considering that it has no images, I am hopeful that I can get some really good compression.
The (35MB) magazine can be downloaded from: http://merajhola.in/jin-march.pdf
I can't see any easy way to reduce the size of this PDF. There are no embedded fonts and all the text is drawn using vector graphics primitives. No amount of tweaking the resolution, compression and quality will make a significant improvement.
One possible option would be to embed the font as a subset rather than drawing the text with vector graphics. That would almost certainly make a big difference; however, I doubt the proprietary font license will allow it.
I'm sorry, but this Shree-Lipi thing just sounds wrong in 2012. It would be much better to use proper OpenType fonts with modern (say InDesign) or free (say LuaTeX) software.

Good library for digital watermarking

Can somebody help me find a library, or a detailed description of an algorithm, that can embed a digital watermark (an invisible watermark, essentially a kind of steganography) into a JPEG/PNG file? The algorithm needs to be good: it should be possible to extract the mark after rotation and scaling of the image (if possible).
The mark is just a 32-byte key.
I found a good site, but the algorithms are written for the NetPBM format, which is dead...
I know there is the LSB method, but it is not robust to scaling. Is there something better?
Changing metadata is not suitable, because those changes are plainly visible.
This may not really be an answer, as I don't think it is easy to give a magical, precise answer to this question. Watermarking is complex, and the best way to do it is yourself: this makes things harder for an attacker trying to reverse engineer your code. Someone could even read your question here, guess what library you used, and attack your system more easily.
Making steganography resistant to scaling in JPEG images is also very hard, because the JPEG compression is reapplied after the scaling. There are in fact a bunch of JPEG steganography algorithms; which one you should use depends on what exactly you require:
Data confidentiality?
Message presence confidentiality?
Message coherence after JPEG changes?
Resistance to "known cover" attacks (when attackers try to find the message, based on the steganographic system)?
Resistance to "known message" attacks (when attackers try to find the steganographic system used, based on the message)?
From what I know, algorithms that resist JPEG changes (picture recompression) are often much easier to attack, whereas algorithms that run their "encode" stage during JPEG compression (after the DCT (lossy) transform and before the Huffman (lossless) coding) tend to resist better.
Also, one key factor in steganography is scale: if you have only 32 bytes of data to encode in, say, a 256×256 px image, don't use an algorithm that can encode 512 bytes of data in an image of the same size. Either use a scalable algorithm, or use an algorithm at its efficient scale.
Also, the best way to do good steganography is to know its limitations and to know how steganalyzers work. Try these tools, so you can understand what attackers will do to your picture.
Now, I cannot tell you which steganographic system will be best for you, but I can give you some pointers:
jSteg - quite old; I don't think it will resist JPEG changes
OutGuess - quite old too, but one of the best algorithms
F5 (and F3/F4) - more recent, a good algorithm with scientific research behind it
Steghide
I think all of these are LSB-based: the encoding is done during JPEG compression, after the DCT and quantization. The only non-LSB-based steganography system I have heard of was mentioned in this research paper; however, I have not read it to the end yet, so I cannot tell whether it will meet your needs.
However, I'm not sure there exists a real steganography algorithm that resists JPEG compression, JPEG resizing and rotation, and visual and statistical attacks. If there is, I'm not aware of it.
Sorry for the lack of a precise answer; I tried to give you what I know on the subject, as it's always better to be well informed. Sorry also for my imperfect English; I'm French, nobody's perfect :)
Pistache is right in what he told you regarding watermarking algorithms. I will try to help by describing one algorithm for the given requirements.
Before explaining the algorithm, the distinction between the JPEG and PNG formats should first be made.
JPEG is a lossy format, i.e. the images undergo compression that can remove the watermark. When you open an image for manipulation and save it again, the writing step applies DCT-based compression that removes some components of the image.
The PNG format, on the other hand, is lossless, which means images are not subject to this kind of compression when stored after manipulation.
In fact, JPEG compression is itself used as an attack on watermarking schemes, because re-compressing the image can remove the watermark.
Now that you know the difference between the two formats, I can describe a suitable algorithm that is resistant to the attacks you mentioned.
To embed a watermark message in PNG files you can use the histogram embedding method, which modifies the histogram by moving pixels between neighbouring bins. For example, imagine that you have a grayscale PNG image.
You then have only one channel for embedding, which means a single histogram with 256 bins. By selecting the neighbouring bins x and x+1, you move pixels with brightness x to x+1 (or the other way around) so that count(x)/count(x+1) > T embeds a '1', or count(x+1)/count(x) > T embeds a '0'.
You can repeat the same procedure along the whole histogram, so in the best case you can embed up to 128 bits. However, this payload is less than what you asked for, so I suggest splitting the image into parts, for example blocks: splitting the image into 4 blocks lets you embed, in the best case, up to 512 bits, i.e. 64 bytes.
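A rough spatial-domain sketch of that bin-ratio idea, using NumPy (the threshold, bin choice and helper names are illustrative only; a robust version would embed in the DWT low-frequency sub-band, as suggested next):

```python
import numpy as np

def embed_bit(img: np.ndarray, x: int, bit: int, T: float = 1.5) -> np.ndarray:
    """Embed one bit in a grayscale image by unbalancing histogram bins x and x+1.

    Pixels are moved between the two neighbouring bins until
    count(x)/count(x+1) > T (encodes '1') or count(x+1)/count(x) > T (encodes '0').
    Toy spatial-domain version of the scheme described above.
    """
    out = img.copy()
    grow, shrink = (x, x + 1) if bit == 1 else (x + 1, x)
    counts = np.bincount(out.ravel(), minlength=256)
    total = counts[x] + counts[x + 1]
    # Smallest population of the "grow" bin satisfying grow/shrink > T.
    target = int(np.floor(total * T / (T + 1))) + 1
    deficit = target - counts[grow]
    if deficit > 0:
        rows, cols = np.nonzero(out == shrink)       # pixels sitting in the other bin
        out[rows[:deficit], cols[:deficit]] = grow   # move just enough of them over
    return out

def extract_bit(img: np.ndarray, x: int) -> int:
    """Recover the bit by comparing the populations of bins x and x+1."""
    counts = np.bincount(img.ravel(), minlength=256)
    return int(counts[x] > counts[x + 1])
```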
This method, however, is very susceptible to filtering and compression if applied directly in the spatial domain. Therefore, I suggest first computing the DWT of the image and embedding in its low-frequency sub-band. This will give you better transparency and increased robustness against warping, resizing and similar attacks, as well as against compression and filtering.
There are other approaches, such as LPM (log-polar mapping), but they are very complex to implement, and I think this approach would be fine for your case.
I can suggest you two papers, the first is:
Watermarking digital image and video data. A state-of-the-art overview
This paper will give you some basic notions of watermarking and explain the LSB algorithm in more detail. The second paper is:
Real-Time Compressed- Domain Video Watermarking Resistance to Geometric Distortions
This paper explains the algorithm I have just described.
Cheers,
I do not know whether you are considering approaches other than steganography. Instead of hiding data in the pixel data, you could create a new data block in the JPEG file and store encrypted data there.
Take a look at the JPEG file structure on Wikipedia.
You can create an application-specific data block (an APPn segment) using the marker 0xFF 0xEn. That way, changes to the image pixels do not affect the information stored in the file. Moreover, many image editors respect custom data blocks and will keep them even after the image is manipulated.
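A minimal sketch of that idea in Python (the segment layout follows the JPEG spec: 0xFF, the APPn marker byte, then a two-byte big-endian length that counts the length field plus the payload; the file names, marker choice and payload here are placeholders):

```python
import struct

def insert_appn(jpeg_bytes: bytes, payload: bytes, app_id: int = 0xEF) -> bytes:
    """Insert an application-specific APPn segment (here APP15, marker 0xFF 0xEF)
    directly after the SOI marker, leaving the image data untouched.
    A more careful implementation would place it after any existing APP0/JFIF
    segment and keep the payload under the 65533-byte segment limit."""
    if jpeg_bytes[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG file (missing SOI marker)")
    segment = bytes([0xFF, app_id]) + struct.pack(">H", len(payload) + 2) + payload
    return jpeg_bytes[:2] + segment + jpeg_bytes[2:]

# Placeholder file names and payload; the payload would be your encrypted mark.
with open("photo.jpg", "rb") as f:
    original = f.read()
with open("photo_marked.jpg", "wb") as f:
    f.write(insert_appn(original, b"encrypted-32-byte-watermark-key!"))
```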