Bridge is packaged with a script that loads multiple files into a Photoshop file, each as its own layer. There are two problems when you do this with vector files:
It converts the files to raster layers, and since you don't get to choose the rasterization size beforehand, files that come in too small can't be scaled up without losing quality.
It doesn't preserve antialiasing, leaving ugly jagged edges on whatever art you imported.
Is there a way to import multiple files into Photoshop as vector smart objects? Then you'd have full control over the quality. Alternatively, is there a way to define the size of the vector files you're loading into layers and/or preserve their antialiasing?
I found a script that loads files into Photoshop as smart objects, but it has the same two problems as the factory Bridge script. It appears to do exactly the same thing, merely converting the layers to smart objects after they are imported, so they are still rasterized.
The only way I currently know of to get vector smart objects into Photoshop is to do it manually, one by one, either by copying from Illustrator or by dragging the files into an open Photoshop document. I'm looking for a way to automate the process.
I'm afraid doing it manually is the only way to get where you want to go. I've wrestled with this same issue for years and hope with every PS/Bridge update they'll add the option to load a stack of smart objects, but so far it's still old-school drag n' drop.
Hit the Adobe suggestion box... maybe with enough requests they'll finally add this as a native feature.
I am currently developing a LabVIEW application whose function (amongst others) is to copy and display images that are automatically updated every half second or so. Depending on when the program copies the picture, it might not be fully generated yet, giving me an incomplete picture, as opposed to when the update has finished and I obtain the full picture.
I would like a way to check whether the image is complete or not. Using the file size is not a viable option, as the amount of information and the colors in the images can vary. I don't have access to the image vision tools, by the way, which makes my task more difficult than it should be.
Thank you for your help,
NFM
As a general solution, if you have access to the code that generates the image, I would strongly suggest adding logic that only replaces the image to be copied once it is complete (for example, write to a temporary file and rename it over the old image when finished).
Without using image processing you have to rely on some additional knowledge about the structure of image files. Once you have loaded the file into memory, there is a wide range of checks you can perform. If the image is a PNG, you have a couple of options (a sketch of both follows the list):
Decode the chunks and check their CRCs for validity: requires looping through the whole file, but ensures the PNG is 100% valid.
Search for the 'IEND' chunk: a quick and easy search for a matching 4-byte value that should sit near the end of the file. Not a perfect confirmation of validity if the file generation is not linear.
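A minimal sketch of both checks, written in Python for clarity since I don't know your exact LabVIEW setup (treat it as pseudocode for the logic to port):

    import struct
    import zlib

    PNG_SIGNATURE = b'\x89PNG\r\n\x1a\n'

    def png_is_complete(path):
        # Option 1: walk every chunk, verify each CRC, and require the
        # final chunk to be IEND. A partially written file fails early.
        # (Option 2 on its own would be just the final IEND test below.)
        with open(path, 'rb') as f:
            data = f.read()
        if not data.startswith(PNG_SIGNATURE):
            return False
        pos = len(PNG_SIGNATURE)
        last_type = None
        while pos + 12 <= len(data):
            (length,) = struct.unpack('>I', data[pos:pos + 4])
            if pos + 12 + length > len(data):
                return False  # truncated mid-chunk
            chunk_type = data[pos + 4:pos + 8]
            chunk_data = data[pos + 8:pos + 8 + length]
            (crc,) = struct.unpack('>I', data[pos + 8 + length:pos + 12 + length])
            if zlib.crc32(chunk_type + chunk_data) & 0xFFFFFFFF != crc:
                return False  # corrupt, or still being written
            last_type = chunk_type
            pos += 12 + length
        return last_type == b'IEND'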
I have a VRML file that is 4.2 GB (!) and consists of 10 different shapes.
It is a cloud of points (no edges or triangles).
How can I display such a big object? Everything I've tried just freezes.
Is there any tool I can use to optimize the points in a stream-like fashion, without loading the whole file?
We have developed our own tool for polygon reduction and some other optimizations, but if you have a cloud of points with no triangles or edges, it is a difficult case.
I would try to make a separate VRML file from each of your shapes and combine them with Inline nodes in a big container VRML file. You may not be able to see the whole scene at once, but you can probably view the separate shapes. You can find tutorials for Inline if you google it. Hope this helps.
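For illustration, the container file would look something like this (the file names are made up):

    #VRML V2.0 utf8
    # container.wrl - pulls in each shape from its own file
    Inline { url "shape01.wrl" }
    Inline { url "shape02.wrl" }
    # ... one Inline node per shape, up to shape10.wrl

A viewer that supports Inline can then load each shape separately instead of parsing one 4.2 GB file.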
Is there a programmatic way to convert two images into an animation sequence (e.g., an animated GIF) like the following example?
This image sequence, taken from a http://memrise.com course, doesn't seem to have manually edited frames; it appears to have been transformed automatically by some kind of shape-morphing algorithm. Is there a common term used to describe such an animation or algorithm? Is there a feature in ImageMagick or Photoshop/GIMP that generates such animations, given a pair of images?
Ideally the technique could be scriptable so I could create animations for several pairs of start-end images.
Edit: I have just been told about GIMP's tool under Filters->Animation->Blend, which appears to do the same thing as a jQuery-style crossfade: each frame i is start + (finish - start) * i/N. In other words, each pixel transitions independently from its start value to its finish value, without any shape morphing. The example given above is more complicated, as it modifies the contours of both images to achieve its compelling effect.
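For reference, a minimal sketch of that per-pixel blend, using Python with Pillow (the function and file names are my own):

    from PIL import Image

    def crossfade(start_path, finish_path, n, out_prefix):
        # Frame i is start + (finish - start) * i/N: every pixel is
        # interpolated independently, with no shape morphing.
        # Both images must have the same dimensions.
        start = Image.open(start_path).convert('RGB')
        finish = Image.open(finish_path).convert('RGB')
        for i in range(n + 1):
            frame = Image.blend(start, finish, i / n)  # alpha = i/N
            frame.save('%s%03d.png' % (out_prefix, i))

The resulting frames can then be assembled into a GIF, e.g. with ImageMagick's convert -delay 5 frame*.png out.gif.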
Other examples:
http://static.memrise.com/uploads/mems/32000121024054535.gif
http://static.memrise.com/uploads/mems/225428000121109232837.gif
I have written a tool that doesn't require setting manual keypoints and is not restricted to a particular domain (like faces). That said, the images do have to be similar (e.g., two faces, or two cars from the same perspective).
https://github.com/kallaballa/Poppy
There is also a web-version created with emscripten.
I generated the above animation using the following command line:
poppy flame.png glyph.png flame.png
Although this is an old question, since ImageMagick is mentioned: for anyone who comes here from Google, it may be worth looking at the ImageMagick plugin called shapemorph.
GIMP can't do that directly, but over the years a series of (now poorly maintained) third-party plug-ins to do that were released. The keyword to search for is "morph"; you should also find a bunch of stand-alone programs that do the same, from gratis to full-fledged Free Software such as xmorph.
Given pairs of vector files (.wmf extension), it is possible to use linear interpolation of shape nodes in Visual Basic for Applications to create frames for GIF animations, though this would take a long time to explain. For some examples see
http://www.giless.co.uk/animatorMorphGIFs.htm (it is like a slideshow)
I have made some improvements since then, as well!
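The VBA itself is too long to reproduce here, but the core of the technique is plain linear interpolation between matching node lists; roughly, as a Python sketch (names illustrative):

    def interpolate_frame(nodes_a, nodes_b, t):
        # Both shapes must have the same number of nodes, in matching
        # order; t runs from 0.0 (shape A) to 1.0 (shape B). Render one
        # frame per t value to get the morph sequence.
        return [((1 - t) * ax + t * bx, (1 - t) * ay + t * by)
                for (ax, ay), (bx, by) in zip(nodes_a, nodes_b)]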
I've got retina tile maps working: a 15x10 map of 64x64 tiles. The problem is that for non-retina devices I will need a 15x10 map of 32x32 tiles. I don't want to recreate the tile map; is it just a case of changing the XML (.tmx) file? Is there an automated tool or another way around this? I've been looking online but haven't found much help.
Thanks
You have to update the TMX file and scale certain attributes. Unless your TMX map is very simple this will be a tedious and error-prone task that's best left to a tool.
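To give an idea of what such a tool has to do, here is a deliberately simplified Python sketch (it covers map/tileset tile sizes and object geometry only; a real map may also need tileset image sizes, offsets, and other attributes scaled):

    import xml.etree.ElementTree as ET

    def scale_tmx(in_path, out_path, factor=0.5):
        # Note: the map's width/height (measured in tiles) stay the
        # same; only pixel-based attributes are scaled.
        tree = ET.parse(in_path)
        root = tree.getroot()  # the <map> element
        for attr in ('tilewidth', 'tileheight'):
            root.set(attr, str(int(int(root.get(attr)) * factor)))
        for tileset in root.iter('tileset'):
            for attr in ('tilewidth', 'tileheight', 'spacing', 'margin'):
                if tileset.get(attr) is not None:
                    tileset.set(attr, str(int(int(tileset.get(attr)) * factor)))
        for obj in root.iter('object'):
            for attr in ('x', 'y', 'width', 'height'):
                if obj.get(attr) is not None:
                    obj.set(attr, str(float(obj.get(attr)) * factor))
        tree.write(out_path)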
There are a variety of TMX rescaling tools out there, but some didn't work for me or were simply incomplete at the time (one, for instance, didn't scale object layers). The tools I know of are generally written in languages that are rather unusual for an iOS developer, like Python, Ruby, or Bash. Others are only available as binaries, without source code.
Check out this cocos2d forum post. Specifically this tool or HDx on the App Store. iTilemaps might also work for you.
Because I wasn't happy with any of the choices, I wrote my own command-line tool, tmx2scale, in Objective-C to rescale TMX maps intelligently in all directions. The tmx2scale tool is not currently available, but it will be distributed, complete with source code, with the KoboldScript Game Kit project.
This is by no means a showstopper problem, just something I've been curious about for some time.
There is the well-known -[UIImage resizableImageWithCapInsets:] API for creating resizable images, which comes in really handy when texturing variable-size buttons and frames, particularly on the retina iPad, and particularly if you have lots of them and want to avoid bloating the app bundle with image resources.
The cap insets are typically constant for a given image, no matter what size we want to stretch it to. To put it another way: the cap insets are characteristic of a given image. So here is the thing: if they logically belong to the image, why don't we store them together with the image (as some kind of metadata), instead of having to specify them everywhere we create a new instance?
In daily practice, this could have serious benefits, mainly by eliminating the possibility of human error. If the designer who creates the images could embed the appropriate cap values in the image file itself upon export, developers would no longer have to write magic numbers in code and keep them updated each time the image changes. The resizableImage API could read and apply the caps automatically. Heck, even a category on UIImage would do.
Thus my question is: is there any reliable way of embedding metadata in images?
I'd like to emphasize these two words:
reliable: I have already seen some entries on the optional PNG chunks, but I'm afraid those are wiped out of existence once the iOS PNG optimizer kicks in. Or is there a way to prevent that (while still letting the optimizer do its job)?
embedding: I have thought of including the metadata in the filename, similarly to what Apple does (i.e. "@2x", "~ipad", etc.), but kilometer-long names like "image-20.0-20.0-40.0-20.0@2x.png" just don't seem to be the right way.
Can anyone come up with a smart solution to this?
Android has a file type called nine-patch that is basically the image plus metadata describing how to stretch it. Perhaps a class could be made to replicate that. http://developer.android.com/reference/android/graphics/NinePatch.html
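As a sketch of the embedding side (the 'CapInsets' key is made up, and as the question notes, you would have to verify that the iOS PNG optimizer leaves the chunk alone), writing and reading a tEXt chunk with Python/Pillow looks like this:

    from PIL import Image, PngImagePlugin

    def embed_cap_insets(in_path, out_path, top, left, bottom, right):
        # Stores the insets in an ancillary tEXt chunk. Optimizers may
        # strip such chunks, so test against your actual build pipeline.
        img = Image.open(in_path)
        meta = PngImagePlugin.PngInfo()
        meta.add_text('CapInsets', '%g,%g,%g,%g' % (top, left, bottom, right))
        img.save(out_path, pnginfo=meta)

    # Reading it back: Pillow exposes text chunks through the info dict.
    # insets = [float(v) for v in
    #           Image.open('button.png').info['CapInsets'].split(',')]

A UIImage category could then do the equivalent parsing at runtime and feed the values to resizableImageWithCapInsets:.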