I have one spritesheet image with all the sprites in it; I downloaded the image from the Internet. My question: is there a tool that can help me get the coordinates of each sprite, or would I have to find the coordinates manually?
I've been reading several tutorials, like http://www.raywenderlich.com/1271/how-to-use-animations-and-sprite-sheets-in-cocos2d and http://indiedevstories.com/2011/04/10/using-sprite-sheets-in-cocos2d-and-tiled-part-1/, but all of them start from multiple images to build one spritesheet and generate the .plist automatically, which is not my case.
In my case I have only one .png and no .plist.
Help me please!!!
Apologies for the self-promotion, but I have created a tool for working with single spritesheet images and outputting the coordinates. It even has automatic sprite selection :)
http://www.darkfunction.com/editor
You can crop out the images using an editor (Photoshop, for example) and then use an existing tool to generate the texture and the plist. I think that is the fastest and easiest way, because even if you only wanted to generate the plist you would still be doing the same operation: selecting each rectangle.
Use Preview to crop out all the images separately, then use Zwoptex or TexturePacker to combine them and generate the spritesheet's .png and .plist.
Other than that, it is difficult to figure out the rect of each image. If you are a good programmer you could write a tool that analyzes the .png and extracts the information needed to identify the rectangles, but my advice is to do it manually.
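If you do want to try the programmatic route, here is a rough Python sketch of such a tool (not a finished solution; it assumes the sprites are separated by fully transparent pixels, and the file name is just a placeholder). It labels each connected blob of opaque pixels in the alpha channel and prints a bounding rect for it, which you could then copy into a .plist or your own data file.

    from PIL import Image
    import numpy as np
    from scipy import ndimage

    # Load the sheet and take the alpha channel (0 = fully transparent).
    img = Image.open("spritesheet.png").convert("RGBA")
    alpha = np.array(img)[:, :, 3]

    # Label each connected region of opaque pixels, then take its bounding box.
    labels, count = ndimage.label(alpha > 0)
    for i, (rows, cols) in enumerate(ndimage.find_objects(labels), start=1):
        x, y = cols.start, rows.start
        w, h = cols.stop - cols.start, rows.stop - rows.start
        print(f"sprite {i}: x={x} y={y} w={w} h={h}")

Note that a sprite drawn in several disconnected pieces will come out as several rects, so you may still need to merge or adjust some of them by hand.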
Divide your image using an image editor, then add the pieces to a spritesheet generator (like Sprite Master). From there you are free to shape the output however you want just by changing the parameters.
Sprite Master will get a feature for parsing a prepared spritesheet into individual images in upcoming versions.
Are there any plugins for ImageFlow.NET that enable me to auto-crop images to focus on the objects in them, not faces? Or is there any other, non-plugin way to do it with ImageFlow.NET?
Not yet, but it is something we're planning to add. In the meantime, you could try using https://github.com/softawaregmbh/smartcrop.net to get the crop coordinates and then feed them to Imageflow. It looks like a slow library, and its built-in encoding and resizing are very poor, but if you only use it to get the coordinates you should be fine.
I've been searching the web for how to do this, and I know it needs to be done with OpenCV. The problem is that all the tutorials and examples I find are about detecting separate shapes or about template matching.
What I need is a way to detect the content between 3 circles (which can be a photo or something else). From what I've read, it's not too difficult to find the circles with the camera using contours, but how do I extract what is between them? The circles work like a pattern on the image for grabbing what is "inside the pattern".
Do I need to use the contours of each circle and measure the distances between them to grab my content? If so, what happens if the image is a bit rotated or distorted by the camera?
I'm using Xamarin.iOS for this, but from what I've already seen I believe I need to go native, so any Objective-C example is welcome too.
EDIT
Imagine that the image captured by the camera is this:
What I want is to match the 3 circles and get the following part of the image as the result:
Since the images come from the camera, they can be rotated or scaled up/down.
The warpAffine function will let you map the desired area of the source image to a destination image, performing cropping, rotation and scaling in a single go.
Talking about rotation and scaling seems to indicate that you want to extract a rectangle of a given aspect ratio, i.e. perform a similarity transform. To define such a transform, three points are more than you need; two suffice. The construction of the affine matrix is a little tricky, though.
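A sketch of that construction in Python with OpenCV (cv2.warpAffine is the real OpenCV call; the helper function, point values and output size below are made-up placeholders, and it assumes you already found the circle centres yourself): given two source points and the two points they should map to, you can solve for the rotation/scale part and then the translation directly.

    import cv2
    import numpy as np

    def similarity_from_two_points(src, dst):
        """Build a 2x3 matrix for the similarity transform (uniform scale,
        rotation, translation) that maps src[0]->dst[0] and src[1]->dst[1]."""
        (x0, y0), (x1, y1) = src
        (u0, v0), (u1, v1) = dst
        dx, dy = x1 - x0, y1 - y0
        du, dv = u1 - u0, v1 - v0
        # The linear part has the form [a -b; b a]; solve it from the two deltas.
        norm = dx * dx + dy * dy
        a = (dx * du + dy * dv) / norm
        b = (dx * dv - dy * du) / norm
        # The translation is whatever is left over after mapping the first point.
        return np.float32([[a, -b, u0 - (a * x0 - b * y0)],
                           [b,  a, v0 - (b * x0 + a * y0)]])

    frame = cv2.imread("camera_frame.jpg")
    src_pts = [(412.0, 310.0), (829.0, 318.0)]   # e.g. two circle centres in the frame
    dst_pts = [(50.0, 50.0), (350.0, 50.0)]      # where they should land in the crop
    M = similarity_from_two_points(src_pts, dst_pts)
    crop = cv2.warpAffine(frame, M, (400, 300))  # (width, height) of the output image

If you detect more than two reference points (the three circles, say), cv2.estimateAffinePartial2D can fit the same kind of transform in a least-squares sense, which is more robust against detection noise.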
I have two button images, say "myButton.png" and "myButton@2x.png", in my app's bundle resources. At run time I load the image using [NSImage imageNamed:@"myButton"], and I need to split it into three slices and create three NSImages to use with the NSDrawThreePartImage() function.
The problem is that each slice NSImage needs to be multi-resolution, so that the system can pick the right resolution dynamically, as it would automatically do with the original whole image.
How do you create such an automatic multi-resolution NSImage programmatically? Thanks!
I think the appropriate solution is to store the image parts as separate images rather than coding a workaround that splits one image into three. That way you can use NSDrawThreePartImage() directly and in the most efficient manner.
I'm trying to create an Objective-C class for my iPad application that can convert a PowerPoint file to a JPEG file.
Accordingly, I have to dig into the .pptx format to see how the file is structured and then create an image from scratch in which I can say: this element goes there, this one here, this text there.
But I actually have no idea how to do this. Is the best way to use an existing iOS framework, or an additional library?
Thanks to everyone ;)
Bye
The fastest way to visualize elements is, to me, OpenGL ES. You can use the mobile GPU for rendering, and then there is CIImage for managing the images.
Take a look at Quartz 2D, the drawing engine used as the main workhorse for 2D graphics on iOS. It gives you all the primitives for drawing shapes, fills, text and other objects you need to render the presentation.
I'm completely new to ArcGIS and ArcMap, but someone suggested this program to me for a project I'm working on.
I would like to animate individual entities on a map, and was wondering if it is possible to do so in ArcMap. I asked this earlier here and a member directed me to a tutorial on animating in ArcGIS. The animation in the guide was over a map spread (i.e. each pixel on the map displays, say, a different color to indicate population data in the area). However, I realized that if I zoom in far enough the image will eventually degenerate into pixels, which is why I need an actual object to mark a certain point. I checked some online tutorials and it seems we can place markers on the map. Can someone tell me if it is possible to animate these markers (for example via a for-loop)? And if so, could you point me in the right direction to start?
Thanks in advance!
The short answer is that you can animate layers in ArcMap. It's not as simple as using the timeline feature in Google Earth, for example, but then ArcMap is much more than just a visualization tool.
This help page on the ESRI web help looks like a good place to start.
I'm not 100% sure what you mean by the image degenerating into pixels. Are you saying that the markers were single points in the layer? Unlike Google Earth, you are not confined to simply plotting points on the map: you can draw completely arbitrary shapes in ArcMap, defined to cover actual areas of the map, so when you zoom in the shape gets larger.
The way you need to load data into ArcMap to produce an animation isn't exactly simple. There might be other ways to do this, but the one I know of is to generate a NetCDF file. That file contains a 3D matrix of layer data, where the layers are separated in time. Because you generate a matrix, you are effectively placing a raster image over the map, so if you want to cover a large area each matrix becomes large, and you multiply that by the number of time slices you wish to animate over.
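For illustration, here is a minimal Python sketch (using the netCDF4 package, with made-up dimensions, variable names and values) of that kind of file: a lat/lon grid with one 2-D slice per time step. The CF-style units and attribute names are the ones ArcMap's multidimension tools generally expect, but check them against the ESRI documentation for your version.

    import numpy as np
    from netCDF4 import Dataset

    nc = Dataset("animation.nc", "w", format="NETCDF3_CLASSIC")
    nc.createDimension("lat", 100)
    nc.createDimension("lon", 100)
    nc.createDimension("time", 24)

    lat = nc.createVariable("lat", "f4", ("lat",))
    lon = nc.createVariable("lon", "f4", ("lon",))
    time = nc.createVariable("time", "f4", ("time",))
    value = nc.createVariable("value", "f4", ("time", "lat", "lon"))

    lat[:] = np.linspace(50.0, 51.0, 100)
    lon[:] = np.linspace(-1.0, 0.0, 100)
    time[:] = np.arange(24)
    lat.units, lon.units = "degrees_north", "degrees_east"
    time.units = "hours since 2010-01-01 00:00:00"

    # One 100x100 raster per time slice -- this is the 3D matrix described above.
    value[:] = np.random.rand(24, 100, 100).astype("f4")
    nc.close()

This also shows why the files grow quickly: the data block is time x lat x lon values, so a finer grid or more time slices multiplies the size directly.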
Once you have a NetCDF file with your data in it, however, getting ArcMap to animate it and produce, say, an .avi file is pretty simple.
You could try just loading some of the example NetCDF datasets into ArcMap to see how/if they will work to get you started.
Hope that helps.
The upcoming v10 will have better time-aware capabilities, which will allow for animation.