We are using the Panoramic Capture plugin to capture stereo pictures of our product, but it doesn't handle our post-process Material effect.
We read this at https://www.unrealengine.com/en-US/tech-blog/capturing-stereoscopic-360-screenshots-videos-movies-unreal-engine-4 : "Note: You may need to force the capture component to have a ViewState for certain post effects (e.g. material effects) to work", but we don't understand how this works.
What is this ViewState? How do we set up the capture component's ViewState so that the post-process Material renders correctly?
Thanks for any help.
Our company is addressing gaps in accessibility on the native side of our app. We received a general diagnostic indicating a lack of text zooming in the native app. The ticket looks like this:
1.4.4 Resize text: Except for captions and images of text, text can be resized without assistive technology up to 200 percent without loss of content or functionality. (Level AA)
Zooming and scaling should not be disabled. (meta[name="viewport"]) Fix the following: user-scalable on <meta> tag disables zooming on mobile devices
We use react-native-web which allows for both web and native in one codebase. Looking at the top HTML file, I don't see anything indicating user-scalable is disabled.
In order to be able to use the pinch gesture to zoom in on text, do I need to go through every text component and add a prop to enable this? I don't want to use a jackhammer for a nail, but we might have to if that's what's required. I would think it would be automatic.
Your ticket seems to come from an automated test by axe-core: https://dequeuniversity.com/rules/axe/4.4/meta-viewport
Contrary to what your ticket says, this is not a WCAG 1.4.4 failure but a web accessibility best practice.
If you don't have user-scalable in your HTML file, maybe the automated test did not run correctly?
Anyway, to manually check whether your mobile app is WCAG 1.4.4 compliant, increase the font size to 200% in your phone's accessibility settings.
You absolutely do NOT need to go through every text component.
The error is accurate. You have disabled some resizing capabilities in your meta tags.
Just check the head of your document for a meta tag with the attribute name="viewport". You will probably find it has an attribute of content set to either "width=device-width, user-scalable=no" or "width=device-width, user-scalable=0". Reset the content attribute to "width=device-width, initial-scale=1". You should be all set.
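If nothing in your source HTML sets user-scalable, it is also worth checking what the browser actually ends up with at runtime, since it is possible that something injects or rewrites the viewport meta after load. A quick sketch you can paste into the devtools console on the deployed page (plain DOM API, nothing framework-specific):

```typescript
// Inspect the viewport meta tag the browser actually sees at runtime.
const viewport = document.querySelector('meta[name="viewport"]');
console.log(viewport?.getAttribute('content'));

// An accessible value looks like:
//   "width=device-width, initial-scale=1"
// The axe-core rule fires on values that disable zoom, e.g.:
//   "width=device-width, user-scalable=no"
```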
I am using the image upload widget without success.
1) result.info.path returns an invalid URL.
2) There is no preview of the uploaded images, due to (1).
3) No images were uploaded to my media folder at Cloudinary.
Fiddle:
https://jsfiddle.net/7uqb83t1/
These are my preset settings:
Can someone share a working version of this widget + preset settings?
On successful upload, you need to check result.info.secure_url for a link to the asset. Currently, in your preset, you're using Async which means the incoming transformation is performed in the background (asynchronously), and as such, you will get a pending result. Async assumes you're using a Notification URL as a webhook where you'll receive the Upload API response when the processing is complete. In your case, I'd recommend turning the Async off.
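For reference, here is a minimal sketch of reading secure_url in the widget callback; the cloud name and preset name are placeholders for your own values:

```typescript
// Minimal Upload Widget sketch; "demo" and "my_unsigned_preset" are placeholders.
declare const cloudinary: any; // provided by the upload widget script tag

const widget = cloudinary.createUploadWidget(
  { cloudName: 'demo', uploadPreset: 'my_unsigned_preset' },
  (error: any, result: any) => {
    if (error) {
      console.log('Upload error:', error); // surfaces preset/transformation errors
      return;
    }
    if (result.event === 'success') {
      // secure_url is the delivery URL of the uploaded asset; use it for the preview.
      console.log('Uploaded:', result.info.secure_url);
    }
  },
);

widget.open();
```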
Also, the incoming transformation configured in your preset is not valid, and because of that you will be getting an error on upload. Please console.log the error in your JSFiddle to see it. Essentially, it will be:
Auto gravity can only be used with crop, fill, lfill, fill_pad or thumb
'auto' gravity (g_auto) implies cropping (automatically selecting the most interesting part of the image to focus on) and therefore you need to use an appropriate crop mode. 'scale' keeps all image data and no cropping is made so that is why g_auto can't work with it. Please see the following section of the documentation for details on the different crop modes - https://cloudinary.com/documentation/image_transformation_reference#crop_parameter - which will help you decide which one you want to use.
Lastly, you should also consider updating your incoming transformation so that it only resizes once, since currently, resizing it three times with the same crop mode is redundant. For example, you can use c_scale,q_auto,w_687 only, or if you want with 'auto' gravity you can try c_fill,g_auto,q_auto,w_687.
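Written out as delivery URLs (the cloud name demo and public ID sample are placeholders; in the preset you would configure just the transformation component as the incoming transformation), the two options look like this:

```typescript
// The two suggested transformations expressed as delivery URLs.
const base = 'https://res.cloudinary.com/demo/image/upload';

// Plain resize to width 687, no cropping (g_auto does not apply here):
const scaled = `${base}/c_scale,q_auto,w_687/sample.jpg`;

// Fill to width 687 and let g_auto pick the most interesting region:
const smartCropped = `${base}/c_fill,g_auto,q_auto,w_687/sample.jpg`;

console.log(scaled, smartCropped);
```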
I am currently testing a DSC-HX400. While I am able to do almost everything I need to with the camera there are a couple of items that are not exposed via the API that have frustrated my efforts.
1) The camera does not seem to offer an option, via the API or the camera itself, to capture images in RAW format. It does offer standard & fine JPEG format but both of those are leaving artifacts in the image that become extremely noticeable when you zoom in with an image editor. Is there a way to get the camera to capture RAW images? I do not need the SDK to return the data just to save it out to the card. If getting the RAW data is impossible has anyone found an inventive way to clean up the artifacts?
2) The camera supports both still shoot and movie mode, but the API will only expose the mode I am currently in. That makes it impossible to transition from still to movie mode (to allow recording) via the API, yet I can make that same transition by pressing a single button on the camera. Once I am recording a movie, the API will allow me to transition back to still mode (by cancelling recording). Are there plans to support triggering a movie recording via the API while in still capture mode (seeing that the firmware already supports this functionality)?
Answers to your questions:
If the camera cannot capture RAW images, the API will not be able to either. I do not know of a way to capture RAW images and can only comment with regard to the API, as I am not an expert on usage of the camera itself.
You can change between still and movie mode by using the "setShootMode" API.
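As a rough illustration of what that call looks like over the Camera Remote API's JSON-RPC interface; the endpoint URL below is an assumption, so use the service URL your camera advertises in its device description:

```typescript
// Sketch of calling setShootMode over HTTP JSON-RPC. The endpoint URL is an
// assumption (a typical Wi-Fi address pattern); use your camera's actual
// service URL from device discovery instead.
const CAMERA_ENDPOINT = 'http://192.168.122.1:8080/sony/camera';

async function setShootMode(mode: 'still' | 'movie'): Promise<void> {
  const res = await fetch(CAMERA_ENDPOINT, {
    method: 'POST',
    body: JSON.stringify({
      method: 'setShootMode',
      params: [mode],
      id: 1,
      version: '1.0',
    }),
  });
  console.log(await res.json()); // e.g. {"result":[0],"id":1} on success
}

// Switch to movie mode before starting a recording:
setShootMode('movie');
```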
I am trying to create an interface similar to the skill tree on this website: http://www.pathofexile.com/passive-skill-tree. What is the best way to go about this and get the same or similar user interaction, i.e. you click on a node and it activates or deactivates it? Being able to move the tree around and zoom in on it would be nice as well. I would like to stay away from webView, as I am thinking about features I want to add. Thanks in advance; I just want to see what a good way to do this is.
You can use a webView, take an almost copy-paste of the presented webpage's HTML source, and load it.
However, with native components you can get better performance, but it will not be a copy.
Native components:
IIViewDeckController for iOS
iHasApp for iOS
There are more along those lines. Consider a combination of them.
So I'm just getting used to, and getting my arms around, the new "panel-based" App scheme released with the 5/5/2012 version of Rally. At first it was a bit frustrating to lose the window real estate I'd been accustomed to with full-page iFrames.
I am curious however - from a desire to optimize the way I use real estate onscreen for an App page - I would like to setup and utilize a multi-panel App whose components can communicate. For instance, I'd like to have one App panel display some control widgets and perhaps an AppSDK table, and a second App panel display a chart or grid that responds to events/controls in the first panel.
I've been scanning the AppSDK docs for hints as to how this might be accomplished, but I'm coming up short. Is there a way to wire up event listeners in one App panel that respond to widget controls in another?
We have not decided the best way to have the Apps communicate yet. That is something we are still spiking out internally to find the best way to do it.
Each custom App is in an IFrame so figuring out how to make them communicate can be a bit tricky. Once we figure out a good way to do it we will be sure to let you know.
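For what it's worth, and outside of anything Rally-specific: since the panels are sibling iframes under the same dashboard page, the generic browser mechanism for this is window.postMessage. A rough sketch, where the event name and payload shape are made up for illustration:

```typescript
// In the controls panel: broadcast the selection to the sibling frames.
const message = { type: 'story-selected', storyOid: 12345 }; // hypothetical payload
for (let i = 0; i < window.parent.frames.length; i++) {
  window.parent.frames[i].postMessage(message, '*'); // scope the target origin in production
}

// In the chart/grid panel: listen for the broadcast and react to it.
window.addEventListener('message', (event: MessageEvent) => {
  if (event.data && event.data.type === 'story-selected') {
    // Refresh this panel for event.data.storyOid here.
    console.log('Selected story', event.data.storyOid);
  }
});
```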
Has this topic, "App Communication", been addressed yet? I would like to have one Custom Grid show User Stories; when a user story is selected, another grid would show the related tasks.