Make labels look the same in Titanium

In Titanium I found that labels in the Android version have quite a different amount of padding than in the iOS version.
How can I normalise them so they look the same on both platforms?

Related

Image size changing from Expo to App Store

When running my application in Expo all images look fine and lovely; it's only when I deploy to the App Store that my images are suddenly huge.
I made sure that I follow Expo's standard on image specs as written here:
https://docs.expo.io/versions/latest/react-native/images/
Even weirder, the images look fine on iPhone X but the size is off on iPhone 7 (though on that same iPhone 7 everything looks fine in Expo). So I'm wondering whether anyone has experienced issues with images going from Expo to the App Store and, if so, how they solved them.
Note: I have yet to deploy to Google Play, so I haven't been able to see the effect there.
All input greatly appreciated.
Just try using the window WIDTH and HEIGHT from Dimensions, and then set your image's width by multiplying WIDTH by some coefficient.
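A minimal sketch of that suggestion, assuming a React Native function component; the asset path, the HeroImage name, the 0.9 coefficient, and the 4:3 aspect ratio are placeholders for illustration, not values from the original answer:

    import React from 'react';
    import { Dimensions, Image } from 'react-native';

    // Window width in density-independent points; the same value whether the
    // app runs inside Expo or as a store build (height is available the same way).
    const WIDTH = Dimensions.get('window').width;

    // Hypothetical coefficient: the image spans 90% of the window width,
    // with the height derived from an assumed 4:3 aspect ratio.
    const IMAGE_WIDTH = WIDTH * 0.9;
    const IMAGE_HEIGHT = IMAGE_WIDTH * (3 / 4);

    export const HeroImage = () => (
      <Image
        source={require('./assets/hero.png')} // placeholder asset path
        style={{ width: IMAGE_WIDTH, height: IMAGE_HEIGHT }}
        resizeMode="contain"
      />
    );

Because the size is computed from the actual window, the image scales the same way on an iPhone 7 and an iPhone X instead of relying on the asset's intrinsic size.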

Does it matter if the app has always the highest density images

I'm building multiple apps in React Native, and I know it's possible to include multiple images at different densities; React Native selects the right one automatically for the Image tag. I know native Android and iOS do the same.
I get it if you want some changes for lower-density devices, maybe a different design for an icon or something like that. But what if you just take the 3x image and resize it to a 2x and a 1x? Then you have the same image, only in smaller files (a sketch of such a resize step appears after the answers below).
Now my question is: does it really matter to add the 2x and 1x if I already have the 3x? I don't see any performance or quality issues with that.
I guess (but am not sure) it can cause RAM issues when a large number of images are shown simultaneously.
Also, a well-known Android developer says that Android has issues with downscaling big images.
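As a concrete sketch of generating the lower-density variants from a single 3x master: this is a hypothetical build step, and the sharp image library, the makeVariants helper, and the img/check file name are assumptions for illustration, not part of the original discussion.

    // Hypothetical Node/TypeScript build step: derive @2x and @1x files from the @3x master.
    import sharp from 'sharp';

    async function makeVariants(base: string): Promise<void> {
      const master = sharp(`${base}@3x.png`);
      const { width = 0, height = 0 } = await master.metadata();

      // The @2x variant is two thirds of the @3x pixel size, the @1x variant one third.
      await master
        .clone()
        .resize(Math.round((width * 2) / 3), Math.round((height * 2) / 3))
        .toFile(`${base}@2x.png`);
      await master
        .clone()
        .resize(Math.round(width / 3), Math.round(height / 3))
        .toFile(`${base}.png`);
    }

    makeVariants('./img/check').catch(console.error);

Shipping the pre-resized files trades a little extra bundle size for not having to downscale the large bitmap on the device, which is the RAM concern raised above.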

Image sizes for android and iOS in react-native

While making iOS apps, we generally supply @1x, @2x, and @3x images, and to my knowledge Android needs roughly six different sizes.
I have started working on React Native and came across this image issue.
My questions are: do I need to provide images in all the different sizes (i.e. roughly 6-7 image sets, combining iOS and Android), or can I provide only one image and the rest is taken care of internally? Will it look blurred on higher-resolution phones?
Thanks.
You still need to provide multiple images. According to the Images documentation, if you are using an image named check.png, you also have to include check@2x.png and check@3x.png.
Quoting:
The packager will bundle and serve the image corresponding to device's screen density. For example, check@2x.png will be used on an iPhone 7, while check@3x.png will be used on an iPhone 7 Plus or a Nexus 5. If there is no image matching the screen density, the closest best option will be selected.
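As a small sketch of that layout (the img/ folder and the CheckIcon component name are just for illustration), the three density variants sit next to each other and the code only ever references the base name:

    import React from 'react';
    import { Image } from 'react-native';

    // Assumed asset layout next to this file:
    //   img/check.png      (1x)
    //   img/check@2x.png   (2x)
    //   img/check@3x.png   (3x)
    export const CheckIcon = () => (
      // require() only the base name; the packager serves the @2x or @3x
      // variant that matches the device's screen density.
      <Image source={require('./img/check.png')} />
    );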

Detect Winks in front facing camera using CIFaceFeature

I have an app which uses AVFoundation and tracks the face, eyes, and mouth position. I use the CIFaceFeature to detect these and mark them on the screen.
Is there a simple way to detect a wink using the framework?
For iOS 7: yes, you can now do it with CoreImage.
Here is the API diff in iOS 7 Beta 2:
CoreImage
CIDetector.h
Added CIDetectorEyeBlink
Added CIDetectorSmile
Before iOS 7:
No, there is no way to do it with the iOS frameworks (AVFoundation or CoreImage).
You can look into OpenCV instead... but it's more of a research topic and not guaranteed to work well in different situations:
First, you need to build an eye open/closed classifier. As far as I know there is no built-in eye-wink classifier in OpenCV, so you need to collect enough "closed" and "open" samples and train a binary classifier yourself. (I would suggest Principal Component Analysis + a Support Vector Machine; both are available in OpenCV.)
Then, on iOS, use CoreImage to detect the locations of both eyes and cut a square patch around each eye center. The size of the patch should be normalized relative to the detected face bounding rectangle.
Next, convert the UIImage/CIImage to OpenCV's IplImage or CvMat format and feed it into your classifier to determine whether each eye is open or closed.
Finally, decide whether a wink occurred based on the sequence of open and closed states.
(You also need to check whether your processing frame rate is high enough to catch a wink: if the wink happens within half a frame interval, you'll never detect it.)
It's a hard problem... otherwise Apple would already have included it in the framework.

Shape recognition on iOS (Objective-C)

What is the best library for shape recognition with the iOS SDK? Does OpenCV work with the iOS SDK?
I want to recognize lines, squares, and circles; I don't need to recognize letters or numbers.
Yes, OpenCV works fine with the official iOS SDK; it works best with Xcode (it is quite complicated to install it properly with a non-official toolchain, though).