What I am trying to accomplish is to take a video stream in through the HDMI port on my MacBook Pro and display the video inside an NSView or window.
My end goal is to do something like this:
dvd player -- HDMI in --> computer -> apply overlay to video -- HDMI out --> tv
I am not sure if I will be able to stream the video out the other side, so if I can simply get it to:
dvd player -- HDMI in --> computer -> apply overlay to video --> display in NSView or window
I will be satisfied. I have downloaded Apple's example code for applying an overlay to a QuickTime video, which is trivial; it is the input-streaming part I am lost on. I suppose one way to put it is that I need my computer to act as a "pass-through device" for video, though I am not sure that is the right term for what I am trying to do. Any help or a pointer in the right direction would be much appreciated!
Use AVFoundation, which works exactly as it does on iOS.
The Apple documentation will walk you through connecting an AVCaptureDevice to an AVCaptureOutput (your AVPlayer hooked up to a UI).
https://developer.apple.com/library/mac/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/00_Introduction.html#//apple_ref/doc/uid/TP40010188-CH1-SW3
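As a rough sketch, assuming your HDMI source shows up as an AVCaptureDevice (which on a Mac requires a capture card, since the built-in HDMI port is output-only), the capture-to-view wiring looks something like this. The method name is hypothetical and error handling is minimal:

```objc
#import <Cocoa/Cocoa.h>
#import <AVFoundation/AVFoundation.h>

// Sketch only: a capture card typically appears alongside the built-in
// camera in the list of video capture devices.
- (void)startCapturePreviewInView:(NSView *)view {
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    AVCaptureDevice *device =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error = nil;
    AVCaptureDeviceInput *input =
        [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        NSLog(@"Could not open capture device: %@", error);
        return;
    }
    if ([session canAddInput:input]) {
        [session addInput:input];
    }

    // A preview layer renders the live video inside the NSView.
    AVCaptureVideoPreviewLayer *preview =
        [AVCaptureVideoPreviewLayer layerWithSession:session];
    preview.frame = view.bounds;
    [view setWantsLayer:YES];
    [view.layer addSublayer:preview];

    [session startRunning];
}
```

An overlay can then be added as an additional CALayer stacked on top of the preview layer.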
I have created a logo in illustrator. Artboard is 150x150, logo itself is 120x90.
I saved this logo as .ai, but I also want to use it on websites. I will include the image, and as you can see, it's a little bit blurry around the edges.
Logo:
I've tried multiple options: Export for Screens with the size changed (2x and 3x), Save As and then PNG, Save for Web (Legacy) optimized for illustrations, ... but the borders are still blurry.
What's the ideal flow to save a .ai-file to a png-file without significant loss of quality?
Your image is fine; there is no way to make it better.
But if you're using a Mac with a Retina monitor, you can get blurry images in a browser: browsers render web pages for non-Retina monitors, and the images get upscaled on Retina monitors unless special CSS tricks are applied.
https://basilsalad.com/how-to/upgrade-website-images-retina-display/
CSS for high-resolution images on mobile and retina displays
Why do bitmap images look blurred on Retina display?
etc
If that's the case (Retina + browser), the only thing you can do is make the PNG about twice as large and tweak your CSS.
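For example, assuming the logo displays at 120x90, you could export a 240x180 PNG and let the browser pick the right density with srcset (the file names here are illustrative):

```html
<!-- Export the PNG at twice its display size (240x180 for a 120x90
     logo); Retina browsers will pick the 2x file automatically. -->
<img src="logo.png"
     srcset="logo.png 1x, logo@2x.png 2x"
     width="120" height="90" alt="Logo">
```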
Hello! This is my first question!
I'm trying some UI animation in Photoshop, and I want to export it as a GIF file. But GIF can't support more than 256 colors, right? So my quality is really low...
I think I found a GIF with 32,697 colors on this website, so is it possible?
http://phil.ipal.org/tc.html
If someone can explain to me how it works... I'm pretty lost.
Thanks for your answer.
True-color GIFs are a hack on the animated GIF format: each frame contains a 256-color square offset from all previous frames, each frame is set not to disappear when the next one appears, and the animation loops only once. Few programs other than web browsers display such images correctly. As a result, the files are very large and not suitable for web use.
Try PNG instead.
I am working on a Sprite Kit game in which the man cannot walk through the tunnel. There are 5 boxes beside the tunnel, all in a line, and the water in the tunnel is moving. The man can push the boxes in any direction. To get across the tunnel he has to use the boxes: he pushes them into the tunnel, where they move with the speed of the water. Once a box is in the tunnel, the man can walk on it, and he can push another box on top of a previously placed box in the river; only the portion of a box above the tunnel should be collision-free.
Hope this image explain what I am trying to achieve.
You just need to set the player's category in the collisionBitMask of the boxes' physicsBody, and vice versa in the player's physicsBody. That way the player and the boxes will collide, and the player's movement will push the boxes.
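A minimal sketch, assuming two hypothetical SKSpriteNode instances `player` and `box` that already have physics bodies attached (the category names are illustrative, not from the question):

```objc
#import <SpriteKit/SpriteKit.h>

// Hypothetical category masks; each category is one bit.
static const uint32_t kPlayerCategory = 0x1 << 0;
static const uint32_t kBoxCategory    = 0x1 << 1;

// The player collides with boxes...
player.physicsBody.categoryBitMask  = kPlayerCategory;
player.physicsBody.collisionBitMask = kBoxCategory;

// ...and boxes collide with the player and with each other,
// so one box can rest on top of another.
box.physicsBody.categoryBitMask  = kBoxCategory;
box.physicsBody.collisionBitMask = kPlayerCategory | kBoxCategory;
```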
I want to make my iOS 6 application compatible with iOS 7. I am using a customized navigation bar and the native tab bar in my application. When I run the application in the iOS 7 simulator, the view moves up. I have set the Delta Y value to 20, but even then I am unable to get the UIView and UI objects into the correct position.
Can someone help me?
Help will be appreciated.
Thanks in advance.
You have to set Delta Y to -20, not 20! Also, regarding the UI controls, check the Auto Layout constraints. If you set all of this up properly, your app will play nice on both iOS 6 and 7.
Try setting self.edgesForExtendedLayout = UIRectEdgeNone for iOS 7. If that doesn't work, additionally set the delta to 64 points, since you appear to be using a UINavigationController, for which you have to allow for 20 + 44 (status bar plus navigation bar).
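For instance, in the view controller's viewDidLoad, a guarded version of that fix (the respondsToSelector: check keeps it safe when the same binary runs on iOS 6, where the property does not exist) might look like:

```objc
- (void)viewDidLoad {
    [super viewDidLoad];

    // iOS 7 only: stop the view from extending under the
    // navigation and status bars.
    if ([self respondsToSelector:@selector(setEdgesForExtendedLayout:)]) {
        self.edgesForExtendedLayout = UIRectEdgeNone;
    }
}
```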
I am fluent in C++, Java, and Python and can pretty much pick up any other skill given enough time (no surprise there, I'm sure 99.9% of the people reading this share the same ability).
I have an idea for a small app for Mac OS X and I was wondering what technology I should employ/learn to get it working. I need some minimal OS X integration to get this done right.
I'm thinking I should probably use objective-C with Cocoa, but if this could be done with some Java library I would prefer that.
My Mac OS X application would do the following:
Be able to intercept all keyboard and mouse input regardless of active (focused) application and select to either block it (effectively disabling input) or act on receipt of certain keyboard shortcuts.
Have a Mac OS X menu bar item (at the top right of the screen next to the battery, network adapter, etc.)
Be able to occupy the entire screen at times (with some OpenGL canvas to display animations, much like a screen saver does)
Have sound.
What technologies would you recommend?
My Mac OS X application would do the following:
Be able to intercept all keyboard and mouse input regardless of active (focused) application and select to either block it (effectively disabling input) or act on receipt of certain keyboard shortcuts.
CGEventTap.
Have a Mac OS X menu bar item (at the top right of the screen next to the battery, network adapter, etc.)
NSStatusItem.
Be able to occupy the entire screen at times (with some OpenGL canvas to display animations, much like a screen saver does)
Any NSView can do this, but for OpenGL, you'll want NSOpenGLView specifically.
Alternatively to the usual full-screen method, you might prefer to put the view in a window at the screen-saver level. Try both ways and see which works best for you.
Have sound.
NSSound.
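As an illustration of the menu-bar item, a minimal NSStatusItem sketch (the title and menu contents are placeholders) could look like this; note that you must keep a strong reference to the item, e.g. in an instance variable, or it will disappear:

```objc
#import <Cocoa/Cocoa.h>

// Create a square item on the right side of the menu bar.
NSStatusItem *statusItem =
    [[NSStatusBar systemStatusBar]
        statusItemWithLength:NSSquareStatusItemLength];
[statusItem setTitle:@"*"];
[statusItem setHighlightMode:YES];

// Attach a menu so clicking the item does something.
NSMenu *menu = [[NSMenu alloc] initWithTitle:@""];
[menu addItemWithTitle:@"Quit"
                action:@selector(terminate:)
         keyEquivalent:@"q"];
[statusItem setMenu:menu];
```

The terminate: action travels up the responder chain to NSApplication, so no explicit target is needed.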
If you are well versed in C-based languages, Cocoa is an excellent place to start and would probably be the easiest for the tasks you describe. Start here for Cocoa: http://developer.apple.com/mac/library/documentation/Cocoa/Conceptual/CocoaFundamentals/Introduction/Introduction.html
Python has some excellent support as well, you can look here for modules that may contain what you need: http://docs.python.org/library/mac.html
If you would prefer Java, here is where I would start looking for the functionality you need: http://developer.apple.com/mac/library/documentation/Java/Conceptual/Java14Development/05-CoreJavaAPIs/CoreJavaAPIs.html
I'm not sure honestly which to recommend, I can just say cocoa will probably have the best support for any integration you may need.
This started out as a comment but got too big.
I think you could do most of this with Java: menu icons, for instance, can be done through the SystemTray API, which puts them in the relevant place on Windows or OS X. A previous answer covers this specifically: System Tray (Menu Extras) icon in Mac Os using Java
The key question is whether Java has APIs to grab 'raw' events from the OS, or only when focus is on the application. The standard KeyListener, for instance, is linked to a component with focus.
However, given the nature of the application, I'd suggest going with Cocoa. This would also allow you to use Core Animation (a higher level abstraction over Quartz / OpenGL).