I just got a new gig with a startup. They have a design studio that creates mockups in Photoshop and then sends them to me (I am the UX designer). Now they have started talking to me about a process for defining how many pixels are needed for the dimensions of every PNG, JPEG, and other mockup asset, and about installing Photoshop on my machine so I can figure out the dimensions myself when I open the PSD files.
To me it sounds normal for the design studio to give me the assets along with a file listing every asset and its dimensions (as in: this is an icon, size 46x80), as opposed to me opening the asset in Photoshop and figuring that out myself.
I was wondering: what do other companies do? What process is in place between the mockup design studio and the UX programmer who translates those assets into actual screens?
Thanks.
There is no "standard". It's generally best to let the designers provide individual files, since they're the Photoshop experts and may be tweaking the images over time. You may have to provide them with a list of files with descriptions, format, size, and variations (enabled, disabled, active). We ask for retina sizes and then have a tool to generate the non-retina ones.
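The non-retina step doesn't need to be anything elaborate, by the way. As a rough illustration (not our actual tool), here is a minimal Python sketch using Pillow, assuming the common convention of an "@2x" suffix on retina files and a flat assets folder:

```python
# Minimal sketch: halve every "@2x" PNG in a folder to produce the
# non-retina version. Assumes Pillow is installed (pip install Pillow)
# and that retina files follow the "name@2x.png" naming convention.
import os
from PIL import Image

SRC_DIR = "assets"  # hypothetical folder of designer-supplied retina files

for name in os.listdir(SRC_DIR):
    if not name.endswith("@2x.png"):
        continue
    path = os.path.join(SRC_DIR, name)
    img = Image.open(path)
    # Halve both dimensions with a high-quality resampling filter.
    half = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
    half.save(os.path.join(SRC_DIR, name.replace("@2x", "")))
    print(f"{name}: {img.size} -> {half.size}")
```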
We like to use Google spreadsheets for the list of files and Dropbox for the actual transfer.
Having said that, you should have Photoshop and learn to use it, because there will be times when a graphic needs a tweak and you don't want to wait on someone else.
So who is actually designing the experience? Is it your job to code it? Or are you the one who is supposed to come up with the wireframes and then have them do the visual design according to that?
With the company I work at (and in my freelance work) I'm rarely given the exact sizing of any assets, unless there's a specific requirement.
For the most part, when building a new site (or amending an existing one), I don't find out the sizes of anything until I open up Photoshop and start cropping.
So I was thinking: is there a way I can program an AI that reads something on the screen (mostly numbers, in a font and a particular area that I will specify) and then performs some clicks according to what it read? The data (the numbers) will change constantly, and the AI will have to watch for these changes and act accordingly. I am not asking exactly how to do it; I am asking whether it is possible, and if so, which approach I should take (for example Python, or something else) and where to start.
You need an OCR library to recognize the digits; Tesseract is the usual choice, and OpenCV can help with capturing and matching them. The rest should be regular programming.
It is quite likely that your operating system doesn't allow you access to parts of the screen that are not owned by your application, so you are either blocked at this point, or you are restricted to parts of the screen owned by your application. (If I enter my details on the screen into my banking app, I definitely don't want another app to be able to read it).
Next you'd need to find a way to read the pixels on the screen programmatically. That will be very different from OS to OS, so it is very unlikely to be built into your language's standard library. You might be able to interface with whatever is available on your OS, or find a library that does it for you. This will give you an image, made of pixels.
Then you need some OCR software to read the text. AI doesn't seem to be involved in any of this.
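To give a feel for the moving parts, here is a minimal Python sketch, assuming the pyautogui (screen capture and clicking) and pytesseract (a wrapper around the Tesseract OCR engine) libraries, and assuming the OS permits the capture per the caveats above. The screen region, click position, and decision rule are placeholders you would replace with your own:

```python
# Minimal sketch: poll a screen region, OCR the digits in it, and click
# when the value changes. Assumes pyautogui and pytesseract are installed
# and the Tesseract binary is on the PATH.
import time
import pyautogui
import pytesseract

REGION = (100, 200, 150, 40)  # left, top, width, height of the number's area

def read_number():
    shot = pyautogui.screenshot(region=REGION)  # PIL image of just that area
    text = pytesseract.image_to_string(
        shot, config="--psm 7 -c tessedit_char_whitelist=0123456789"
    ).strip()  # single-line mode, digits only
    return int(text) if text.isdigit() else None

last = None
while True:
    value = read_number()
    if value is not None and value != last:  # react only when the number changes
        if value > 100:                      # placeholder decision rule
            pyautogui.click(500, 600)        # placeholder click position
        last = value
    time.sleep(0.5)  # poll twice a second
```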
Please help me choose a tool for testing a watermark/image overlay. The transparency can be 0%, so that should not be a problem.
The application under test is a WPF desktop application on Windows, and the autotests are written in WinAppDriver + C#. Right now it looks like I have to take a screenshot of a specific element and compare the actual image with the ideal sample using a mask.
The product under test is a video camera with the ability to insert a logo/watermark and/or additional details (date/name/address) onto the image and video. The task is to automatically verify the correctness of the inserted logo and of the inserted details in the image/video (size, color, whether the logo was mirrored after insertion, whether a name was rendered badly, and so on).
At the moment I am thinking about using OpenCV or Sikuli. I know that Appium had something similar but it probably won't work with my driver.
It is also unclear how and what can be tested with video. Should I just take one frame at random and test it the same way as an image?
Many thanks for your help and suggestions!
Perhaps not a complete answer to your questions, but a few words on how Sikuli works and what might be a disadvantage, if I understand your needs correctly. First of all, Sikuli uses OpenCV internally by calling the Imgproc.matchTemplate() function. There is not much control over it from Sikuli, but you can set a minimum similarity score that varies between 0 (everything will match) and 1 (pixel-perfect comparison). Given that you intend to use it on patterns that originate from video, you'd want to be somewhere in the middle. Having said that, I am not sure what quality of comparison you'd like to obtain, so I am not sure whether the minimum similarity by itself will be enough.
Another thought is to integrate the OpenCV library itself into your code and use it directly. This is not an easy task, and some basic understanding of image-processing techniques might be required.
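For what it's worth, the core of the direct route is small once OpenCV's Python bindings (opencv-python) are installed. A minimal sketch, with placeholder file names, a placeholder frame index, and an arbitrary 0.8 similarity threshold, covering both the image case and pulling a single frame out of a video:

```python
# Minimal sketch: template-match a reference watermark against an element
# screenshot, and against one frame pulled from a video. File names, the
# frame index, and the 0.8 threshold are placeholders.
import cv2

def watermark_score(frame, template):
    # Normalized cross-correlation; best score in max_score, position in max_loc.
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, max_score, _, max_loc = cv2.minMaxLoc(result)
    return max_score, max_loc

template = cv2.imread("logo_reference.png")  # the ideal watermark sample

# Image case: compare the element screenshot against the sample.
frame = cv2.imread("element_screenshot.png")
score, loc = watermark_score(frame, template)
assert score >= 0.8, f"watermark mismatch, score={score:.2f}"

# Video case: seek to one frame (here the 100th) and run the same check.
cap = cv2.VideoCapture("recording.mp4")
cap.set(cv2.CAP_PROP_POS_FRAMES, 100)
ok, video_frame = cap.read()
cap.release()
if ok:
    score, loc = watermark_score(video_frame, template)
    print(f"frame score {score:.2f} at {loc}")
```

A threshold well below 1 leaves room for video compression artifacts, which is the same idea as the minimum-similarity setting mentioned above.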
According to their website (http://www.gdpicture.com/products/managed-pdf/), you have the ability to extract fonts from a PDF file. However, I can't seem to find the functionality to do this. I have encountered several methods for adding fonts, but none for extracting them (and they don't show up as embedded files). Has anyone tried to do this, or does anyone have experience with GdPicture?
Version: 14 (Current)
Disclosure: I am part of the ORPALIS technical staff that publishes the GdPicture.NET SDK, which is how I know there is already ongoing communication about this.
It is my understanding that you have a support case open for a merging issue related to fonts, and, as you know, our development team is currently working on a fix that will solve it, so I strongly recommend that you wait for them to finish.
There is no extraction of embedded fonts as you might expect at the moment, but the development team is working on that as well; we will let you know as soon as it is available (it should be very soon).
You can get information about (already) embedded fonts using the GetFontCount, IsFontEmbedded, GetFontName and GetFontType methods.
You can also add new embedded fonts (of different types) using the AddFontFromFileU, AddStandardFont, AddTrueTypeFont, AddTrueTypeFontFromFile, AddTrueTypeFontFromFileU and AddTrueTypeFontU methods.
I have searched using many different terms and phrases and waded through many pages of results, but I have (remarkably) not seen anyone else addressing, or even asking about, this issue. So here goes...
Ultimate Goal: Allow a user viewing a content-based page (may contain both text and images) within a Windows Store app to share that content with someone else.
Description
I am working on taking a fair amount of content and making it available for browsing/navigating as a Windows 8/WinRT/Windows Store (we need a consistent name here) application. One of the desired features is to take advantage of the Share Charm, so that someone viewing a page can share that page with someone else.
The ideal behavior is for the application to implement the Share Source contract which would share an email message that contained some explanatory text, a link to get the app from the Windows Store, and a "deep link" into the shared page in the application.
Solutions Considered
We had originally looked at just generating a PDF representation of the page, but there are very few external libraries that work under WinRT, and having to include externally licensed code would be problematic as well. Writing our own PDF generation code would be out of scope.
We have also considered generating a Word document or PowerPoint slide using OpenXML, but again we run up against the limitations of WinRT. In this case, it is highly unlikely that the OpenXML SDK is usable in a WinRT application.
Another thought was to pre-generate all of the pages as .pdf files, store them as resources, and, when the Share Charm is invoked, share the .pdf file associated with the current page. The problem here is that the application will have at least 150 content pages and, depending on how we break the content down, possibly over 600. This would likely cause serious bloat.
Where We Are At
Thus we have come to sharing URIs. From what I can tell, though, the "deep linking" feature is only intended for use on Secondary Tiles tied to your application. Another avenue I considered was registering a protocol like "my-special-app:" with the OS and having it fire up the application, but that would require HKCR registry access, which is outside the WinRT sandbox.
If it matters, we are leaning towards an HTML/JS application, rather than XAML/C#, because the converted content will all be in HTML and the WebView control in WinRT is fairly limited. This decision is not yet final, though.
Conclusion
So, is this possible, and if so, how would it be done or where can I find documentation on it?
Thanks,
Dave Parker
The stage in the Flash CS4 Authoring Environment is a running SWF. That is what makes things like the 3D and Bone tools work in the IDE.
Is it possible to access that SWF? I suspect the immediate answer would be no, because that might raise security issues and cause lots of developers to crash the IDE every 5 minutes :).
That said, I don't expect this to be a straightforward process, but I guess there should be a way to access it.
Any thoughts?
I can only tell you how components work on the stage, where we've attempted the type of access you talk about.
I suspect that at their core, the 3D and Bone tools are implemented using component-like tech to display the "live" stage instance. In general, this would involve a compiled live-preview SWF placed on the stage. It is misleading to think of the stage as a single player: each component preview runs in its own sandbox that, as far as I can tell, has no means of communication with other component previews on the IDE stage. There is no common storage location.
Of course, if you were in charge of the preview SWF (as in the case of a component), you could try LocalConnection to chat, but the previews you want to penetrate are closed. I suspect that if you dig hard enough, you'd find the Bone/3D preview hidden in the installation folders (perhaps in an SWC; ik.swc looks interesting) and might be able to hack about at it with a decompiler, but straight out of the box, I'm not sure there's a solution to what you ask.