For the local maxima, could we set up some rule for the offset so that the curve only keeps the points we think are peaks?
image FilterLocalMaxima1D( image spectrumIn, number range )
{
    image spectrumOut := spectrumIn.ImageClone()
    for( number dx = -range; dx <= range; dx++ )
        spectrumOut *= ( spectrumIn >= offset(spectrumIn, dx, 0) ? 1 : 0 )
    spectrumOut.SetName( "Local maxima (" + range + ") filtered" )
    return spectrumOut
}
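For comparison, here is a sketch of the same masking idea in plain Python (illustration only; the DM-script above is the actual answer). A point survives only if it is greater than or equal to every neighbour within +/- range channels; everything else is zeroed, just like multiplying the comparison masks together:

```python
def filter_local_maxima_1d(spectrum, rng):
    """Keep only points that are >= every neighbour within +/- rng channels;
    all other points are set to 0 (mirrors the mask multiplication above)."""
    out = []
    for i, v in enumerate(spectrum):
        neighbours = spectrum[max(0, i - rng):i] + spectrum[i + 1:i + rng + 1]
        out.append(v if all(v >= nb for nb in neighbours) else 0)
    return out

print(filter_local_maxima_1d([0, 2, 1, 5, 3, 3, 4, 0], 1))
# -> [0, 2, 0, 5, 0, 0, 4, 0]
```

Note how a flat shoulder next to a higher neighbour (the 3, 3 run before the 4) is suppressed, while each strict local maximum survives.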
Please don't post multiple, separate questions in one post - rather post them separately.
As for your #1
You can simply make any image positive by taking its absolute values everywhere:
img = abs(img)
As for your #2
No, offset will always operate on the full image expression.
As for your #3
You can certainly print results into a separate text window instead of the results window. See F1 help documentation here:
documentwindow win = NewScriptWindow("My Text", 100, 100, 600, 900 )
win.EditorWindowAddText( "The quick brown fox jumps over the lazy dog.\n" )
You can also save those text windows by script.
As for Excel: There is no direct functionality for that, but you might be able to do something creative with the LaunchExternalProcess() command.
Also, you can use the command ScrapCopy() as equivalent to pressing CTRL + C on an image, and then just paste that into Excel. (Copy & Paste of lineplot data will give you the calibrated XY table. At least for recent GMS versions.)
Related
I remember there is a function to calculate the differential/derivative of a line plot in some DM version; it looks like it is under Process - Non-Linear Filter - Derivative. But I do not remember which version has this function. Any suggestions?
The UI functionality for spectral filtering is found in the Spectrum menu:
Since GMS 3 this functionality is part of the free software; before that, it was part of the Spectroscopy license (any).
The menu only works on line profiles that are spectra; when required, the Convert Data To menu can be used to make a line profile a spectrum.
As all "menu" commands, you can access them using the ChooseMenuItem command as in:
GetFrontImage().SelectImage() // Make sure the image window is selected, or the menu is disabled if the script-window is frontmost!
ChooseMenuItem("Spectrum","Numerical Filters","First derivative")
The mathematical functions behind this menu are also available as (unofficial, undocumented) script commands. They do not use the preferences but the parameters directly, using uncalibrated 'channel' scale.
So you could also use:
image src := GetFrontImage()
number chWidth = 5 // The values matching the settings
number chDelta = 1 // The values matching the settings
number chShift = trunc((chWidth + chDelta)/2 + 0.5)
number norm = chWidth + chDelta
image fDev := src.FDeriv_Spectrum( chWidth, chShift, norm )
fDev.ShowImage()
Just be warned that there is no guarantee that the command FDeriv_Spectrum will be kept in future versions of GMS (it is not an officially supported command).
Finally, the math of a first derivative is really simple, so you could also just recreate the function with pure DM-script commands like offset and arithmetic operators.
A simple, non-smoothed 1-channel derivative would be:
image src := GetFrontImage()
image fdev := src - src.offset(-1,0)
fdev.ShowImage()
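For illustration only (plain Python, not DM-script), this is what the one-channel difference computes, assuming the offset samples the previous channel; the first channel, which has no left neighbour, is simply set to 0 here (DM-script's offset() clamps at the edges instead):

```python
def first_derivative(spectrum):
    """Simple 1-channel difference: d[i] = s[i] - s[i-1].

    The first element has no left neighbour, so it is set to 0 here.
    """
    return [0] + [spectrum[i] - spectrum[i - 1] for i in range(1, len(spectrum))]

print(first_derivative([1, 4, 9, 16, 25]))
# -> [0, 3, 5, 7, 9]
```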
I wrote a little program that copies the frame and position of a ROI from one image to another image of the same size.
What I want to do now is to connect the two ROIs in such a way that when I move one ROI, the other one moves accordingly.
On Dave Mitchell's DM scripting website, I found that he uses the function ConnectObject, but he does not explain how it works.
I read the DM3 documentation and couldn't find any information about that function.
You can use one of two methods:
1) Use "ConnectObject" to attach functionality to when a specific ROI is moved, i.e. when you move ROI 1 it "triggers" code which you can use to update other ROIs.
2) Use "ImageDisplayListeners" to attach functionality to when any ROI on a specific imageDisplay is moved,
i.e. when a ROI on image A is moved, it triggers code which you can use to update other ROIs.
You will find example code in this answer.
For simple things there is another option:
Adding the identical ROI to more than one image-display:
In this case, the ROIs are "linked" automatically, because they really are only a single object in memory (but displayed on two displays.) Changing one will change the other.
However, this linkage is "lost" if you save/load the images, because when you load the image, all ROIs (in memory) are newly created. Here is some simple example code:
image img1, img2
GetTwoLabeledImagesWithPrompt("Select two images of same size.", "Select", "Source", img1, "Destination", img2 )
imageDisplay disp1 = img1.ImageGetImageDisplay( 0 )
imageDisplay disp2 = img2.ImageGetImageDisplay( 0 )
number nR = disp1.ImageDisplayCountROIs()
for ( number i = 0; i<nR; i++ )
{
ROI theROI = disp1.ImageDisplayGetROI(i)
disp2.ImageDisplayAddROI(theROI)
}
I have a set of images and want to make a cross matching between all and display the results using trackbars using OpenCV 2.4.6 (ROS Hydro package). The matching part is done using a vector of vectors of vectors of cv::DMatch-objects:
image[0] --- image[3] -------- image[8] ------ ...
| | |
| cv::DMatch-vect cv::DMatch-vect
|
image[1] --- ...
|
image[2] --- ...
|
...
|
image[N] --- ...
Because we omit matching an image with itself (there's no point in doing that), and because a query image might not be matched with all the rest, each set of matched train images for a query image might have a different size from the rest. Note that the way it's implemented right now, I actually match a pair of images twice, which of course is not optimal (especially since I use a BruteForce matcher with cross-check turned on, which basically means that I match a pair of images 4 times!), but for now that's how it is. In order to avoid on-the-fly drawing of matched pairs of images, I have populated a vector of vectors of cv::Mat objects. Each cv::Mat represents the current query image and some matched train image (I populate it using cv::drawMatches()):
image[0] --- cv::Mat[0,3] ---- cv::Mat[0,8] ---- ...
|
image[1] --- ...
|
image[2] --- ...
|
...
|
image[N] --- ...
Note: In the example above cv::Mat[0,3] stands for cv::Mat that stores the product of cv::drawMatches() using image[0] and image[3].
Here are the GUI settings:
Main window: here I display the current query image. Using a trackbar - let's call it TRACK_QUERY - I iterate through each image in my set.
Secondary window: here I display the matched pair (query,train), where the combination between the position of TRACK_QUERY's slider and the position of the slider of another trackbar in this window - let's call it TRACK_TRAIN - allows me to iterate through all the cv::Mat-match-images for the current query image.
The issue here comes from the fact that each query can have a variable number of matched train images. My TRACK_TRAIN should be able to adjust to the number of matched train images, that is, the number of elements in each cv::Mat-vector for the current query image. Sadly, so far I have been unable to find a way to do that. cv::createTrackbar() requires a count parameter, which from what I see sets the limit of the trackbar's slider and cannot be altered later on. Do correct me if I'm wrong, since this is exactly what's bothering me. A possible solution (less elegant and involving various checks to avoid out-of-range errors) is to take the size of the largest set of matched train images and use it as the limit for my TRACK_TRAIN. I would like to avoid doing that if possible. Another possible solution involves creating a trackbar per query image with the appropriate value range and swapping each into my secondary window according to the selected query image. For now this seems the easier way to go, but it poses a big overhead of trackbars, not to mention the fact that I haven't heard of OpenCV allowing you to hide GUI controls. Here are two examples that might clarify things a little bit more:
Example 1:
In the main window I select image 2 using TRACK_QUERY. For this image I have managed to match 5 other images from my set. Let's say those are images 4, 10, 17, 18 and 20. The secondary window updates automatically and shows me the match between image 2 and image 4 (first in the subset of matched train images). TRACK_TRAIN has to go from 0 to 4. Moving the slider in both directions allows me to go through images 4, 10, 17, 18 and 20, updating the secondary window each time.
Example 2:
In the main window I select image 7 using TRACK_QUERY. For this image I have managed to match 4 other images from my set. Let's say those are images 0, 1, 11 and 19. The secondary window updates automatically and shows me the match between image 7 and image 0 (first in the subset of matched train images). TRACK_TRAIN has to go from 0 to 3. Moving the slider in both directions allows me to go through images 0, 1, 11 and 19, updating the secondary window each time.
If you have any questions feel free to ask and I'll try to answer them as well as I can. Thanks in advance!
PS: Sadly the way the ROS package is it has the bare minimum of what OpenCV can offer. No Qt integration, no OpenMP, no OpenGL etc.
After doing some more research I'm pretty sure that this is currently not possible. That's why I implemented the first proposition that I gave in my question - use the match-vector with the highest number of matches in it to determine a maximum size for the trackbar, and then use some checking to avoid out-of-range exceptions. Below is a more or less detailed description of how it all works. Since the matching procedure in my code involves some additional checks that do not concern the problem at hand, I'll skip it here. Note that in a given set of images we want to match, I refer to an image as an object-image when that image (example: a card) is currently matched to a scene-image (example: a set of cards) - the top level of the matches-vector (see below), equal to the index in processedImages (see below). I find the train/query notation in OpenCV somewhat confusing. This scene/object notation is taken from http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html. You can change or swap the notation to your liking, but make sure you change it everywhere accordingly, otherwise you might end up with some weird results.
// stores all the images that we want to cross-match
std::vector<cv::Mat> processedImages;
// stores keypoints for each image in processedImages
std::vector<std::vector<cv::KeyPoint> > keypoints;
// stores descriptors for each image in processedImages
std::vector<cv::Mat> descriptors;
// fill processedImages here (read images from files, convert to grayscale, undistort, resize etc.), extract keypoints, compute descriptors
// ...
// I use brute-force matching since I used ORB, which has binary descriptors, so NORM_HAMMING is the way to go
cv::BFMatcher matcher(cv::NORM_HAMMING);
// matches contains the match-vectors for each image matched to all other images in our set
// top level index matches.at(X) is equal to the image index in processedImages
// middle level index matches.at(X).at(Y) gives the match-vector for the Xth image and some other Yth from the set that is successfully matched to X
std::vector<std::vector<std::vector<cv::DMatch> > > matches;
// contains images that store visually all matched pairs
std::vector<std::vector<cv::Mat> > matchesDraw;
// fill all the vectors above with data here, don't forget about matchesDraw
// stores the highest count of matches for all pairs - I used simple exclusion by simply comparing the size() of the current std::vector<cv::DMatch> vector with the previous value of this variable
long int sceneWithMaxMatches = 0;
// ...
// after all is ready do some additional checking here in order to make sure the data is usable in our GUI. A trackbar for example requires AT LEAST 2 for its range since a range (0;0) doesn't make any sense
if(sceneWithMaxMatches < 2)
return -1;
// in this window show the image gallery (scene-images); the user can scroll through all image using a trackbar
cv::namedWindow("Images", CV_GUI_EXPANDED | CV_WINDOW_AUTOSIZE);
// just a dummy to store the state of the trackbar
int imagesTrackbarState = 0;
// create the first trackbar that the user uses to scroll through the scene-images
// IMPORTANT: use processedImages.size() - 1 since indexing in vectors is the same as in arrays - it starts from 0 and not reducing it by 1 will throw an out-of-range exception
cv::createTrackbar("Images:", "Images", &imagesTrackbarState, processedImages.size() - 1, on_imagesTrackbarCallback, NULL);
// in this window we show the matched object-images relative to the selected image in the "Images" window
cv::namedWindow("Matches for current image", CV_WINDOW_AUTOSIZE);
// yet another dummy to store the state of the trackbar in this new window
int imageMatchesTrackbarState = 0;
// IMPORTANT: again since sceneWithMaxMatches stores the SIZE of a vector we need to reduce it by 1 in order to be able to use it for the indexing later on
cv::createTrackbar("Matches:", "Matches for current image", &imageMatchesTrackbarState, sceneWithMaxMatches - 1, on_imageMatchesTrackbarCallback, NULL);
while(true)
{
char key = cv::waitKey(20);
if(key == 27)
break;
// from here on the magic begins
// show the image gallery; use the position of the "Images:" trackbar to call the image at that position
cv::imshow("Images", processedImages.at(cv::getTrackbarPos("Images:", "Images")));
// store the index of the current scene-image by calling the position of the trackbar in the "Images:" window
int currentSceneIndex = cv::getTrackbarPos("Images:", "Images");
// we have to make sure that the match of the currently selected scene-image actually has something in it
if(matches.at(currentSceneIndex).size())
{
// store the index of the current object-image that we have matched to the current scene-image in the "Images:" window
int currentObjectIndex = cv::getTrackbarPos("Matches:", "Matches for current image");
cv::imshow(
"Matches for current image",
matchesDraw.at(currentSceneIndex).at(currentObjectIndex < (int)matchesDraw.at(currentSceneIndex).size() ? // is the current object index within the range of the matches for the current scene
currentObjectIndex : // yes, return the correct index
matchesDraw.at(currentSceneIndex).size() - 1)); // if outside the range show the last matched pair!
}
}
// do something else
// ...
The tricky part is the trackbar in the second window, responsible for accessing the matched images of our currently selected image in the "Images" window. As I've explained above, I set the trackbar "Matches:" in the "Matches for current image" window to have a range from 0 to (sceneWithMaxMatches - 1). However, not all images have the same number of matches with the rest of the image set (this applies tenfold if you have done some additional filtering to ensure reliable matches, for example by exploiting the properties of the homography, the ratio test, a min/max distance check etc.). Because I was unable to find a way to dynamically adjust the trackbar's range, I needed to validate the index. Otherwise, for some of the images and their matches the application will throw an out-of-range exception. This is due to the simple fact that for some matches we try to access a match-vector with an index greater than its size minus 1, because cv::getTrackbarPos() goes all the way up to (sceneWithMaxMatches - 1). If the trackbar's position goes out of range for the currently selected vector of matches, I simply set the matchDraw-image in "Matches for current image" to the very last one in the vector. Here I exploit the fact that neither the indexing nor the trackbar's position can go below zero, so there is no need to check the lower bound - only what comes after the initial position 0. If this is not your case, make sure you check the lower bound too and not only the upper.
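The range-clamping trick inside the cv::imshow() call boils down to the following (a pure-Python sketch with hypothetical data, just to show the logic):

```python
def clamped_match_index(trackbar_pos, matches_for_scene):
    """Clamp a global trackbar position to the valid index range of the
    current scene's (possibly shorter) match list. Positions past the end
    fall back to the last match, mirroring the check in the imshow() call."""
    return min(trackbar_pos, len(matches_for_scene) - 1)

# Hypothetical data: scene 0 has 5 matched images, scene 1 only 2.
matches_draw = [["m0-4", "m0-10", "m0-17", "m0-18", "m0-20"], ["m1-3", "m1-9"]]
print(clamped_match_index(4, matches_draw[0]))  # -> 4 (in range)
print(clamped_match_index(4, matches_draw[1]))  # -> 1 (clamped to last match)
```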
Hope this helps!
This question relates to a previous question I have asked.
I have a series of 48 textures on flat square meshes that I am rendering, and they all combine to form one "scene." They each have a large percentage of transparency with one or two smaller images, and when they are lined up, I should be able to see the full scene. I expected this would work without much issue, but when I went to test it, I see only the top-most texture, and anywhere it would have transparency, it is just the clear color.
At first, I thought it was an issue with how I was loading the image and somehow was disabling the alpha, but after playing around with the clear color, I realized that there was some transparency.
Second, I tried enabling blending - this works if all the textures are combined on a single z plane.
I have posted my image loading and blending code on the question I linked to above.
Now I am starting to think it may be an issue with the depth buffer, so I added the following code to my window dependent resources:
Microsoft::WRL::ComPtr<ID3D11DepthStencilState> DepthDefault;
D3D11_DEPTH_STENCIL_DESC depthstencilDesc;
ZeroMemory(&depthstencilDesc, sizeof(depthstencilDesc));
depthstencilDesc.DepthEnable = FALSE;
depthstencilDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depthstencilDesc.DepthFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.StencilEnable = FALSE;
depthstencilDesc.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
DX::ThrowIfFailed( direct3d.device->CreateDepthStencilState(&depthstencilDesc, DepthDefault.GetAddressOf() ) );
direct3d.context->OMSetDepthStencilState(DepthDefault.Get(), 0);
Even with this code, I am only seeing the topmost layer. Am I missing something, or am I setting something incorrectly?
Edit: To visualize the problem, it's as if I had 48 panes of glass that are all the same size and they are all in a row. Each piece of glass has one image somewhere on it. When you look through all the glass panes, you get one extra awesome image of all the smaller images combined. For me, directx or the pixel shader is only drawing the first glass pane and filling all the transparency of the first pane with the clear/background color.
Edit: The code I'm using to create the depthstencilview:
CD3D11_TEXTURE2D_DESC depthStencilDesc( DXGI_FORMAT_D24_UNORM_S8_UINT, backBufferDesc.Width, backBufferDesc.Height, 1, 1, D3D11_BIND_DEPTH_STENCIL );
ComPtr<ID3D11Texture2D> depthStencil;
DX::ThrowIfFailed( direct3d.device->CreateTexture2D( &depthStencilDesc, nullptr, &depthStencil ) );
auto viewDesc = CD3D11_DEPTH_STENCIL_VIEW_DESC(D3D11_DSV_DIMENSION_TEXTURE2D);
DX::ThrowIfFailed( direct3d.device->CreateDepthStencilView( depthStencil.Get(), &viewDesc, &direct3d.depthStencil ) );
That code is literally right above my depth test / D3D11_DEPTH_STENCIL_DESC code. I'm presuming that this creates the depth buffer.
I think you might need to sort the order in which you render your vertices if you want to render semi-transparencies with a depth buffer. If you don't want to use a depth buffer - perhaps just don't define/create/set it?
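If you go the sorting route, the idea is to draw the panes farthest-first so alpha blending composites each pane over what is already behind it. A minimal pure-Python sketch of just the sort step (hypothetical pane data, not D3D code):

```python
# Each pane: (z_distance_from_camera, name). With alpha blending, a pane
# composites correctly only over panes that were drawn before it, so we
# render in order of decreasing distance (back to front).
panes = [(1.0, "front"), (48.0, "back"), (20.0, "middle")]
draw_order = sorted(panes, key=lambda p: p[0], reverse=True)
print([name for _, name in draw_order])
# -> ['back', 'middle', 'front']
```

In the actual renderer this would mean sorting the 48 quads by their distance from the camera each frame (or once, if the camera and panes are static) before issuing the draw calls.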
I am having a really strange problem with Pygame, and it's got me stumped for the last few (OK, more like 5) hours. There are free programs out there for making photo mosaics, but ever since my early days tinkering with VB5, I've wanted to write my own version. You know how that is. I have all kinds of cool parts written for loading source images, finding color averages and everything. But here I'm stuck and confused. So very stuck and confused.
This part of the program converts a 'target image' (the one that will be made up of small source images) to smaller blocks of color that other source images will try to match and then replace eventually. But for some reason, the size of the blocks keeps increasing with every iteration. I've tried so many different things that I've had to go through the script and delete out a bunch of things and add a couple more comments before posting.
The target img is a 1280x800 random google image, but any other picture should work just the same. Watch as the Y size of the blit increases with every block going down, and the X size increases as new rows are made. I hard coded in a set size for the solid color rectangle (2 pixels across, much smaller than I'll use), but for whatever reason this keeps increasing. The first row of blits is so small right now that it's hard to see. That quickly changes.
Here's the link to the image I'm using (http://www.travelimg.org/wallpapers/2012/01/iceland-golden-falls-druffix-europe-golden-falls-golden-falls-iceland-natur-waterfall-waterfalls-800x1280.jpg), but any other pic/size renamed to target.jpg should do the same.
If anyone can point me in the right direction it would be much appreciated. I want to cover this whole source pic in nice 12x12 blocks of solid color to start with. I can't figure out what is changing these block sizes as it goes.
import pygame
import os
from time import sleep
okformats = ['png','jpg','bmp','pcx','tif','lbm','pbm','pgm','ppm','xpm']
targetimg = 'C:\\Python27\\mosaic\\target.jpg'
if targetimg[-3:] not in okformats:
print 'That format is unsupported, get ready for some errors...'
else:
print 'Loading...'
pygame.init()
screen = pygame.display.set_mode((100,100)) #picked a size just to start it out
clock = pygame.time.Clock() #possibly not needed in this script
targetpic = pygame.image.load(targetimg).convert()
targetrect = targetpic.get_rect() #returns something like [0,0,1280,800]
targetsize = targetrect[2:]
targetw = targetrect[2]
targeth = targetrect[3]
numpicsx = 100 #number of pictures that make up the width
sourceratio = 1 #testing with square pics for now
picxsize = targetw/numpicsx
numpicsy = targeth/(picxsize*sourceratio)
picysize = targeth/numpicsy
print 'Blitting target image'
screen = pygame.display.set_mode(targetsize)
screen.fill((255,255,255)) #set to white in case of transparency
screen.blit(targetpic,(0,0))
#update screen
pygame.display.update()
pygame.display.flip()
clock.tick(30)
SLOWDOWN = .1 #temp slow down to watch it
print numpicsx #here are some print statements just to show all the starting values are correct
print numpicsy
print '---'
print picxsize
print picysize
sleep(1)
for x in xrange(numpicsx):
for y in xrange(numpicsy):
currentrect = [x*picxsize,y*picysize,x*picxsize+picxsize,y*picysize+picysize]
avgc = pygame.transform.average_color((targetpic), currentrect) #average color
avgc = avgc[:3] #drops out the alpha if there is one
pygame.draw.rect(screen, avgc, currentrect)
#pygame.draw.rect(screen, avgc, (currentrect[0],currentrect[1],currentrect[0]+2,currentrect[1]+2)) #hard coded 2s (rather than 12s in this case) to help pin point the problem
pygame.display.update()
pygame.display.flip()
clock.tick(30) #probably not needed
sleep(SLOWDOWN)
print 'Done.\nSleeping then quitting...'
sleep(3)
pygame.quit()
A friend of mine took a look at my code and showed me the problem. I was thinking that the rect format for drawing was (x1,y1,x2,y2), but it's actually (x,y,width,height). This is the new line:
currentrect = [x*picxsize,y*picysize,picxsize,picysize]
I also dropped the clock.tick(30) lines to speed it all up.
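The difference between the two rect conventions can be shown in one helper (plain Python; pygame rects take (left, top, width, height), not two corner points):

```python
def corners_to_pygame_rect(x1, y1, x2, y2):
    """Convert an (x1, y1, x2, y2) corner pair to pygame's
    (left, top, width, height) rect convention."""
    return (x1, y1, x2 - x1, y2 - y1)

# A 12x12 block whose top-left corner is at (24, 36):
print(corners_to_pygame_rect(24, 36, 36, 48))
# -> (24, 36, 12, 12)
```

Passing corner coordinates where pygame expects width/height is exactly why the drawn blocks kept growing: the "width" was really x*picxsize+picxsize, which increases with every column.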