Detect 360 degree video

I want to detect 360-degree video.
I want to find out whether a video is in 360-degree format and, based on that, use a player that supports 360-degree playback.
Can anybody help?

It seems that 360 video/VR standards are still in flux. This article, http://labs.dash.umn.edu/etc-lab/detecting-spherical-media-files/ (dated April 2016), suggests that inspecting the EXIF properties of the photo/video might reveal a pattern in the metadata. There are libraries available to access EXIF data from files, such as https://github.com/exif-js/exif-js. PHP libraries exist as well. Phil Harvey's website (http://www.sno.phy.queensu.ca/~phil/exiftool/) has a wealth of information and coding examples; there are compiled tools for Windows, Mac and *nix, and a C++ library as well.

From personal experience, EXIF values can be read from JPEG files. I have never tried accessing 360 JPEGs, but it is my understanding that the photo files produced by spherical cameras are nothing but regular JPEGs (although the images appear distorted when viewed without a 360 viewer). The ExifTool that Phil Harvey has written shows support for MP4, so you may be able to use it to analyze 360 videos and look for discernible patterns in the metadata.
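If you want a quick programmatic check without a full EXIF/XMP library, one crude heuristic is to look for the GSpherical XML block that the Google/YouTube Spherical Video V1 convention injects into the moov box of equirectangular MP4s. Below is a minimal sketch in C++; the file name and the choice of marker string are illustrative assumptions, and a robust solution should parse the MP4 boxes properly or use one of the metadata tools mentioned above.

// Minimal heuristic sketch (not a full MP4/XMP parser): scan the file for the
// GSpherical XML marker used by the Spherical Video V1 convention.
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

static bool contains_marker(const std::string& path, const std::string& marker)
{
    std::ifstream in(path, std::ios::binary);
    if (!in) return false;

    std::vector<char> chunk(1 << 20);  // 1 MiB read buffer
    std::string window;                // keeps an overlap across chunk boundaries
    while (in.read(chunk.data(), static_cast<std::streamsize>(chunk.size())) || in.gcount() > 0)
    {
        window.append(chunk.data(), static_cast<size_t>(in.gcount()));
        if (window.find(marker) != std::string::npos)
            return true;
        // Keep only the tail so a marker split across chunks is still found.
        if (window.size() > marker.size())
            window.erase(0, window.size() - marker.size());
    }
    return false;
}

int main()
{
    // "GSpherical:Spherical" appears in the XML injected into V1 spherical videos.
    // "video.mp4" is a placeholder file name.
    bool spherical = contains_marker("video.mp4", "GSpherical:Spherical");
    std::cout << (spherical ? "Looks like a 360 video\n" : "No spherical metadata found\n");
    return 0;
}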

Related

Download OpenStreetMap tiles as .png

I'm trying to use offline OpenStreetMap in a React Native application. For that reason, and according to react-native-maps, I need to store the tiles in a specific format:
The path template of the locally stored tiles. The patterns {x} {y} {z} will be replaced at runtime
For example, /storage/emulated/0/mytiles/{z}/{x}/{y}.png
I tried to download the tiles from tile servers; however, I found out that it would take a lot of time (it is almost impossible). I also looked at the proposed ways to download tiles, but I don't know the file extensions and I don't know if I could convert one of them to PNG. Therefore, I wonder if there is an open-source/free way to do that.
I also found this software, but I can only use it up to zoom=13; otherwise it's not free.
Bulk downloads are usually forbidden. See the tile usage policy. Quoting the important parts:
OpenStreetMap’s own servers are run entirely on donated resources.
OpenStreetMap data is free for everyone to use. Our tile servers are not.
Bulk downloading is strongly discouraged. Do not download tiles unnecessarily.
In particular, downloading significant areas of tiles at zoom levels 17 and higher for offline or later usage is forbidden [...]
You can render your own raster tiles by installing rendering software such as TileMill or by running your own tile server. Alternatively, take a look at Commercial OSM software and services.
Alternatively switch to vector tiles. Obtaining raw OSM data is rather easy. Vector tiles allow you to render tiles on your device on the fly.
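For reference, the {x}, {y} and {z} placeholders in the path template quoted in the question follow the standard slippy-map tile numbering used by OSM. Here is a minimal sketch of the conversion from latitude/longitude and zoom level to a tile path; the /storage/emulated/0/mytiles prefix is just the example path from the question.

#include <cmath>
#include <cstdio>
#include <string>

// Convert a latitude/longitude (degrees) and zoom level into slippy-map tile
// indices, then into the {z}/{x}/{y}.png layout expected for local tiles.
static std::string tile_path(double lat_deg, double lon_deg, int zoom)
{
    const double pi = 3.14159265358979323846;
    const double lat_rad = lat_deg * pi / 180.0;
    const double n = std::pow(2.0, zoom);

    const int x = static_cast<int>(std::floor((lon_deg + 180.0) / 360.0 * n));
    const int y = static_cast<int>(
        std::floor((1.0 - std::asinh(std::tan(lat_rad)) / pi) / 2.0 * n));

    char buf[128];
    std::snprintf(buf, sizeof(buf), "/storage/emulated/0/mytiles/%d/%d/%d.png",
                  zoom, x, y);
    return std::string(buf);
}

int main()
{
    // Example: the tile covering central Paris at zoom 13.
    std::printf("%s\n", tile_path(48.8566, 2.3522, 13).c_str());
    return 0;
}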

Why does coregraphics support PDF and why is the format so pivotal to Apple?

There are so many formats that intuitively seem more closely bound to graphics: vector formats such as SVG, raster formats such as PNG and JPEG, etc. Why does PDF, of them all, get a dedicated library?
Note: this question is NOT opinion-based. I'm interested in the actual reason why PDF has such a central place in xOS. Give it a day to see.
My bona fides: I licensed my PDF viewer to Apple for Rhapsody, and then the PDF library I wrote in Objective-C to Apple as the basis of CoreGraphics. I've written code for NeXT systems since 1988, and worked as a contractor for NeXT and Apple many times.
As Rob says, NeXTstep used Display PostScript ("DPS"), under license from Adobe. When Apple switched from "Classic" Mac OS to OS X, they absolutely didn't want to keep paying Adobe the per-seat DPS license for each Mac it sold; it had been years since they'd agreed to pay a per-seat license to anyone for the Mac.
And, at the same time, DPS didn’t reflect modern graphics trends. It was a Turing-complete language and ran in its own process, and client apps would have to send inter-process messages to it, which meant expensive context switches and no direct sharing of buffers. (Also it broke down isolation between client processes.)
So the graphics team (led then by Peter Graffagnino) needed to replace DPS, but also not break all the existing NeXTstep apps. As it happened, an engineer at Adobe (I believe it was originally Sam Streeper) had written a graphics language that was MUCH simpler than PostScript and wasn't Turing-complete, but still had most of the drawing conventions of PostScript. (Display PostScript was a superset of PostScript, so any PostScript program runs in DPS.)
PDF was originally designed for PostScript-driven printers, because lots of PostScript programs (aka "print jobs") ended up slurping down memory and processor time and often were just too big to work on any individual printer (RAM was expensive back then). Converting PostScript files to PDF made them far easier to process, and since PDF doesn't have control flow you couldn't, say, send the printer into an infinite loop. (Adobe processed PDF with a PostScript program they'd send to the printer as well.)
PDF was also an open standard, so with Apple writing their own implementation they didn’t have to pay Adobe anything to use it. PDF was fast and simple and free and had the exact same model because it was designed to run on PostScript printers.
So: Apple had to switch to a new graphics language for Mac OS X (née NeXTstep), and wanted one with the same semantics as Display PostScript so all their existing apps largely still worked.
So first they wrote the function calls of CoreGraphics, implementing all the conventions of PostScript/PDF (e.g. how clip paths work, how paths are filled if they self-overlap, resolution independence, the state stack, etc.). Then they rewrote all the existing high-level graphics functions (e.g. NSRectFill()) so they were on top of CoreGraphics, so 99% of existing NeXTstep programs could switch from DPS to CG without modification. (By the late 1990s most NeXTstep programmers had realized that dipping into DPS and trying to create effects by writing PostScript was pretty much always a bad idea, so we used the high-level calls.)
But the final missing piece was giving programs a format they could read and write to disk. In NeXTstep days we used both PostScript and Encapsulated PostScript, but the former was Right Out now. PDF was a natural fit (for the reasons outlined above) so they added reading and writing of PDF files.
Postscript (pun intended): The PDF code in CoreGraphics today is not what I licensed to Apple; they used my code as more of a "working reference" and came up with their own APIs, since CoreGraphics couldn't use Objective-C and my API was never really intended to be used by general programs.
The drawing model used by Quartz 2D is based on the PDF specification.
It is widely stated that Quartz "uses PDF internally" (notably by Apple in their 2000 Macworld presentation and Quartz's early developer documentation), ... Quartz's internal imaging model correlates well with the PDF object graph, making it easy to output PDF to multiple devices.
https://en.wikipedia.org/wiki/Quartz_(graphics_layer)#Use_of_PDF
https://developer.apple.com/library/content/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_overview/dq_overview.html
Continuing Mahal Tertin's points, I'll focus on the drawing side, which isn't really PDF-centric, but the history is interesting anyway (I'll get to the file-format side below).
Quartz was developed before 1999, and was a fairly natural continuation of NeXT's use of Display Postscript and QuickDraw, which conceptually goes back to the Lisa and was used extensively on pre-OS X Macs.
SVG might have been a reasonable base, but work on it didn't even start until 1999. It was never in a position to be the basis for OS X drawing. Even if it had come out a couple of years earlier, it would still have been an untested, experimental format vs. an established format that Apple already had extensive code for and experience with.
When iOS came along, it inherited Core Graphics, which made development dramatically easier for OS X developers. Converting everything to an SVG DOM or the like would have made iOS much harder to migrate to. (I had complex custom views on OS X that I ported verbatim to iOS with the addition of one flip translation and a hacky #define to replace NSColor with UIColor.) It also made developing iOS much easier in the first place since Apple could reuse substantial existing code. Why replace the entire drawing system with another format? To what purpose?
Quartz/PDF/PostScript all have the concept of independent objects/layers that are composited together, and all have lots of support for text and lines. This makes a lot of sense when you're building a windowing system and want to easily drag a window across the screen, overlapping other windows, while minimizing computation. PNG and JPEG are built to compress single, static images; they don't make any sense for this use. JPEG in particular would be a horrible way to try to manage a windowing system full of crisp, straight lines and text, exactly the things JPEG hates.
So that's the drawing side (which isn't really PDF-centric anyway, but it answers why SVG DOM isn't the heart of Core Graphics), but why does PDF get its own framework and not PNG?
As a graphics format, PNG does get a lot of support in iOS (as opposed to OS X). iOS is highly optimized to display PNG, and converting images to and from PNG is very easy in iOS. There really isn't a need for a PNGKit. What would it do beyond what's built into UIKit, where PNG and JPEG are given their own special handling right in UIImage (handling that PDF doesn't get)?
PDFKit exists because PDF is complicated, and it addresses printing problems that the other formats don't (things like handling multiple pages; when was the last time someone mailed you an SVG Print document?). While it may seem that it's getting a lot of special support by getting its own framework, PDFKit isn't really very good and doesn't support a lot of the more complicated PDF features. So while there is a long history of PDF inside Core Graphics, PDF as a file format is actually pretty poorly supported by Apple, IMO. (Compare PSPDFKit, which is much more powerful and a necessary addition for almost any app that uses PDF in a non-trivial way.)

SDK to Encode and Decode JPEG2000 images from C++ code

I am looking for an SDK with (roughly) the following capabilities regarding JPEG2000 files:
Decode and encode J2K files.
Decode to access individual elements (boxes, marker segments, image stream, etc.) of JPEG2000 images for inspection and potential alteration of texts and bits.
Encode (reconstruct) the JPEG2000 image with given elements.
This is all done from within C++ applications.
It must support 64-bit Redhat Linux OS.
It should be able to handle J2K (JPEG2000) files as large as 16 GB, i.e. 64-bit file addressing.
Please tell me about SDKs with the above capabilities that you know of or have used in your projects. Also, hints on performance and licensing/pricing would be appreciated.
The best JPEG 2000 library is Kakadu: http://www.kakadusoftware.com/
No problem: Kakadu can handle raw codestreams (j2c) and file-formatted codestreams (jp2).
Full codestream access and manipulation.
Not sure what you mean here, but if you mean assembling components or pieces of an image (i.e. tiles), yes, Kakadu is more than capable.
Yes, it's written in C++ so it will easily integrate with other C++ applications.
Yes, every major platform is supported.
Yes, 64bit addressing is supported.
Source: http://www.kakadusoftware.com/documents/Overview.txt
As for pricing and the licensing model, it differs depending on how you use it. Licenses start at $250 USD for individual licenses and $500 USD for evaluation licenses. See here for the most accurate details on licensing: http://www.kakadusoftware.com/index.php?option=com_content&task=blogcategory&id=6&Itemid=12
Kakadu is authored by David Taubman, one of the key contributors to the JPEG 2000 specification. If you need JPEG 2000 to do something more, he will be a great person to ask for help.
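To give a feel for the API, here is a rough decode sketch modeled on the demo apps bundled with Kakadu (e.g. kdu_buffered_expand). The header, class and method names below are recollections from older SDK releases and should be treated as assumptions; newer releases also wrap everything in the kdu_core/kdu_supp namespaces, so check against the version you license.

#include <vector>

#include "kdu_file_io.h"              // kdu_simple_file_source (assumed header)
#include "kdu_compressed.h"           // kdu_codestream
#include "kdu_stripe_decompressor.h"  // kdu_stripe_decompressor (support library)

int main()
{
    // Open a raw codestream; for .jp2 files the jp2_family_src/jp2_source
    // wrappers would be used instead (names per Kakadu's demo code).
    kdu_simple_file_source input("image.j2c");  // placeholder file name

    kdu_codestream codestream;
    codestream.create(&input);

    // Dimensions of image component 0 and the number of components.
    kdu_dims dims;
    codestream.get_dims(0, dims);
    int num_components = codestream.get_num_components();

    // Decode everything into one interleaved 8-bit buffer in a single stripe.
    std::vector<kdu_byte> pixels(
        static_cast<size_t>(dims.size.x) * dims.size.y * num_components);
    std::vector<int> stripe_heights(num_components, dims.size.y);

    kdu_stripe_decompressor decompressor;
    decompressor.start(codestream);
    decompressor.pull_stripe(pixels.data(), stripe_heights.data());
    decompressor.finish();

    codestream.destroy();
    return 0;
}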
Another commercial library to do this is Accusoft Pictools.
We use it for Medical Imaging purposes.
It supports most known formats, including JPEG2000 (.jp2).
https://www.accusoft.com/pictools.htm
Has complete libraries to be called from unmanaged code.
regards
Ari

H264 frame viewer

Do you know of any application that will show me all the headers/parameters of a single H.264 frame? I don't need to decode it, I just want to see how it is built up.
Three ways come to mind (if you are looking for something free; otherwise google "h264 analysis" for paid options):
Download the h.264 parser from this thread on the doom9 forums
Download the h.264 reference software
libh264bitstream provides h.264 bitstream reading/writing (see the sketch below)
This should get you started. By the way, the h.264 bitstream is described in Annex B of the ITU spec.
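If you go the libh264bitstream route, a rough sketch of walking the NAL units in an Annex B file and dumping their headers might look like the following. Function and struct names follow that project's README and h264_stream.h, so verify against the version you build; the sample file name is just an assumption.

#include <cstdint>
#include <cstdio>
#include <vector>

#include "h264_stream.h"  // from libh264bitstream

int main(int argc, char** argv)
{
    if (argc < 2) { std::fprintf(stderr, "usage: %s stream.264\n", argv[0]); return 1; }

    // Read the whole Annex B byte stream into memory (fine for short samples).
    std::FILE* f = std::fopen(argv[1], "rb");
    if (!f) { std::perror("fopen"); return 1; }
    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);
    std::vector<uint8_t> buf(static_cast<size_t>(size));
    if (std::fread(buf.data(), 1, buf.size(), f) != buf.size()) { std::fclose(f); return 1; }
    std::fclose(f);

    h264_stream_t* h = h264_new();
    uint8_t* p = buf.data();
    int remaining = static_cast<int>(size);
    int nal_start = 0, nal_end = 0;

    // Walk the start-code-delimited NAL units and print their headers.
    while (find_nal_unit(p, remaining, &nal_start, &nal_end) > 0)
    {
        read_nal_unit(h, &p[nal_start], nal_end - nal_start);
        std::printf("NAL unit: type=%d ref_idc=%d size=%d bytes\n",
                    h->nal->nal_unit_type, h->nal->nal_ref_idc,
                    nal_end - nal_start);
        p += nal_end;            // skip past this NAL unit and its start code
        remaining -= nal_end;
    }

    h264_free(h);
    return 0;
}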
I've created a Web version - https://mradionov.github.io/h264-bitstream-viewer/
Based on h264bitstream and inspired by H264Naked. It was done by compiling h264bitstream into WebAssembly and building a simple UI on top of it. The output information for NAL units is taken from H264Naked at the moment. It also supports files of any size; it will just take some time initially to load the file, but navigation through the stream should be seamless.
I had the same question. I tried h264 analysis, but it only supports Windows. So I made a similar tool with Qt to support different platforms. Download H264Naked. This tool is essentially a wrapper around libh264bitstream.

Mac OS X equivalent for DirectShow, GraphEdit

New to Mac OS X, familiar with Windows. Windows has DirectShow, a good number of built-in filters, COM programming, and GraphEdit for very fast prototyping and snooping on the graphs you've constructed in code.
I'm now about to move to the Mac to work with cameras, webcams, microphones, color spaces, files, splitting, synchronization, rendering, file reading, file saving, and many of the things I've come to take for granted with DirectShow when putting together applications for live performance. On the Mac side, so far I've found ... nothing! Either I don't know where to look or I'm having the toughest time tying the Mac's reputation for its ease of handling media to a coherent programmatic ability to get in there and start messin' with media-manipulatin' building blocks.
I've seen some weak suggestions to use gstreamer or some library for QT but I can't bring myself to believe that this is the Apple way to go. And I've come across some QuickTime documentation but I'm not looking to do transitions, sprites, broadcasting, ...
Having a brain trained on DirectShow means I don't even know how Apple thinks about providing DirectShow-like functionality. That means I don't know the right keywords and don't even know where to look. Books? Bought a few. Now I might be able to write some code that can edit your sister's wedding video (if I can't make decent headway on this topic I may next be asking what that'd be worth to you), but for identifying what filters are available and how to string them together ... nothing. Suggestions?
Video handling is going through a huge transition on the Mac at the moment. QuickTime is very old, but also big and powerful, so it's been undergoing an incremental replacement process for the past 5 years or so.
That said, QTKit is the QuickTime subset (capture, playback, format conversion and basic video editing) which is supported going forward. The legacy QuickTime APIs are still there for the moment, and probably will remain at least until its major features are available elsewhere, but are 32-bit only. For some involved video stuff you may end up needing to use it in places.
At the moment, iOS is ahead of the Mac because it could start from scratch with AV Foundation. The future of the Mac media frameworks will probably either be AV Foundation directly (with QTKit being a lightweight shim over the top) or an extension of QTKit that looks very similar.
For audio there's Core Audio which is on Mac and iOS and isn't going away any time soon. It's quite powerful but somewhat obtuse in places. Luckily online support is very good; the mailing list is an essential resource.
For filters and frame-level processing you've got Core Video as someone else mentioned, as well as Core Image. For motion graphics there's Quartz Composer which includes a graphical editor and a plugin architecture to add your own patches. For programmatic procedural animation and easily mixing rendering models (OpenGL, Quartz, video, etc.) there's Core Animation.
In addition to all of these, of course there's no reason you can't use open source libraries where the built-in stuff doesn't do what you want.
To address your comment below:
In QuickTime (and QTKit), individual data types like audio and video are represented as tracks. It may not be immediately clear that QuickTime can open audio as well as video file formats. A common way to combine audio and video would be:
Create a QTMovie with your video file.
Create a QTMovie with your audio file.
Take the QTTrack object representing the audio and add it to the QTMovie with the video in it.
Flatten the movie, so it doesn't simply contain a reference to the other movie but actually contains the audio data.
Write the movie to disk.
Here's an example from Blender. You'll see how the A/V muxing is done in the end_qt function. There's also some use of Core Audio in there (AudioConverter*). (There's some classic QuickTime export code in quicktime_export.c but it doesn't seem to do audio.)