I want to change the alpha value of all pixels that are black in an image, and I'm hitting some real hassles with this. The solutions out there do not work for me. I load a JPG, then convert the image to QImage::Format_ARGB32:
qDebug() << "OUT IMAGE FORMAT HAS ALPHA CHANNEL " << outImage.hasAlphaChannel();
The above prints true. I then try:
for (int y = 0; y < outImage.height(); y++) {
    for (int x = 0; x < outImage.width(); x++) {
        QColor c = outImage.pixel(x, y);
        outImage.setPixel(x, y, (uint) qRgba(c.red(), c.green(), c.blue(), 0));
    }
}
mainIm->setPixmap(QPixmap::fromImage(outImage));
With the above, nothing happens. If I use:
outImage.setPixel(x,y, (uint) qRgba(255,255,255,255));
I see a perfect solid white square replacing the image, as expected.
If I use:
outImage.setPixel(x,y, (uint) qRgba(255,255,255,100));
I see the original image with a translucent white square over it at alpha 100.
If I use:
outImage.setPixel(x, y, (uint) qRgba(0, 0, 0, 0));
Nothing happens.
I've tried so many variations based on the solutions here on SO and on the Qt forums that my head now hurts, so I need to ask for help :-( I was expecting to use this in combination with scanLine() for performance, but speed is not too important.
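For reference, here is the scanLine() version of what I'm ultimately trying to achieve, assuming the image really is Format_ARGB32 (this sketch shows my intent, not code that behaves for me):

for (int y = 0; y < outImage.height(); ++y) {
    // each ARGB32 scan line is a row of QRgb values
    QRgb *line = reinterpret_cast<QRgb *>(outImage.scanLine(y));
    for (int x = 0; x < outImage.width(); ++x) {
        // make every black pixel fully transparent
        if (qRed(line[x]) == 0 && qGreen(line[x]) == 0 && qBlue(line[x]) == 0)
            line[x] = qRgba(0, 0, 0, 0);
    }
}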
All help appreciated.
Thanks.
I'm making an OS X app which creates a color scheme from the main colors of an image.
As a first step, I'm using NSCountedSet and colorAtX to get all the colors from an image and count their occurrences:
func sampleImage(width: Int, height: Int, imageRep: NSBitmapImageRep) -> (NSCountedSet, NSCountedSet) {
    // Store all colors from the image
    let colors = NSCountedSet(capacity: width * height)
    // Store the colors from the left edge of the image
    let leftEdgeColors = NSCountedSet(capacity: height)
    // Loop over the image pixels
    var x = 0
    var y = 0
    while x < width {
        while y < height {
            // Instruments shows that `colorAtX` is very slow
            // and using `NSCountedSet` is also very slow
            if let color = imageRep.colorAtX(x, y: y) {
                if x == 0 {
                    leftEdgeColors.addObject(color)
                }
                colors.addObject(color)
            }
            y += 1
        }
        // Reset y every x loop
        y = 0
        // We sample a vertical line every x pixels
        x += 1
    }
    return (colors, leftEdgeColors)
}
My problem is that this is very slow. In Instruments, I see there are two big bottlenecks: NSCountedSet and colorAtX.
So first I thought I'd replace NSCountedSet with a pure Swift equivalent, but the new implementation was, unsurprisingly, much slower than NSCountedSet.
For colorAtX, there's this interesting SO answer, but I haven't been able to translate it to Swift (and I can't use a bridging header to Objective-C for this project).
My problem when trying to translate it is that I don't understand the unsigned char and char parts of the answer.
What should I try to scan the colors faster than with colorAtX?
- Continue adapting the Objective-C answer, because it's a good answer? Despite being stuck for now, maybe I can achieve this later.
- Use another Foundation/Cocoa method that I don't know of?
- Anything else I could try to improve my code?
TL;DR
colorAtX is slow, and I don't understand how to adapt this Objective-C answer to Swift because of unsigned char.
The fastest alternative to colorAtX() would be iterating over the raw bytes of the image using let bitmapBytes = imageRep.bitmapData and composing the colour yourself from that information, which should be really simple if it's just RGBA data. Instead of your for x/y loop, do something like this...
let bitmapBytes = imageRep.bitmapData
var colors = Dictionary<UInt32, Int>()
var index = 0
for _ in 0..<(width * height) {
    // Assumes 4 tightly packed bytes per pixel, in RGBA order
    let r = UInt32(bitmapBytes[index]); index += 1
    let g = UInt32(bitmapBytes[index]); index += 1
    let b = UInt32(bitmapBytes[index]); index += 1
    let a = UInt32(bitmapBytes[index]); index += 1
    // Pack the four channels into a single UInt32 dictionary key
    let finalColor = (r << 24) + (g << 16) + (b << 8) + a
    colors[finalColor] = (colors[finalColor] ?? 0) + 1
}
You will have to check the order of the RGBA values though, I just guessed!
The quickest way to maintain a count is a [UInt32: Int] dictionary of packed pixel values to counts, as above. Later on, if you need to, you can convert a packed value back to a color using NSColor(calibratedRed:green:blue:alpha:).
To detect 3D world coordinates from 2D screen coordinates on iOS, is there any possible way besides a gluUnProject port?
I've been fiddling around with this for days on end now, and I can't seem to get the hang of it.
-(void)receivePoint:(CGPoint)loke
{
    GLfloat projectionF[16];
    GLfloat modelViewF[16];
    GLint viewportI[4];

    glGetFloatv(GL_MODELVIEW_MATRIX, modelViewF);
    glGetFloatv(GL_PROJECTION_MATRIX, projectionF);
    glGetIntegerv(GL_VIEWPORT, viewportI);

    loke.y = (float) viewportI[3] - loke.y;

    float nearPlanex, nearPlaney, nearPlanez, farPlanex, farPlaney, farPlanez;
    gluUnProject(loke.x, loke.y, 0, modelViewF, projectionF, viewportI, &nearPlanex, &nearPlaney, &nearPlanez);
    gluUnProject(loke.x, loke.y, 1, modelViewF, projectionF, viewportI, &farPlanex, &farPlaney, &farPlanez);

    float rayx = farPlanex - nearPlanex;
    float rayy = farPlaney - nearPlaney;
    float rayz = farPlanez - nearPlanez;
    float rayLength = sqrtf((rayx*rayx) + (rayy*rayy) + (rayz*rayz));

    // normalizing rayVector
    rayx /= rayLength;
    rayy /= rayLength;
    rayz /= rayLength;

    float collisionPointx, collisionPointy, collisionPointz;
    for (int i = 0; i < 50; i++)
    {
        collisionPointx = rayx * rayLength/i*50;
        collisionPointy = rayy * rayLength/i*50;
        collisionPointz = rayz * rayLength/i*50;
    }
}
There's a good chunk of my code. Yeah, I could have easily used a struct, but I was too mentally lazy to do it at the time; that's something I can go back and fix later.
Anywho, the point is that when I print the values with NSLog after calling gluUnProject, the near-plane and far-plane results aren't even close to accurate. In fact, they both come back exactly the same, and on top of that, on the first click x, y, and z are all "nan".
Am I skipping over something extraordinarily important here?
There is no gluUnProject function in OpenGL ES 2.0, so which port are you using? There is also no GL_MODELVIEW_MATRIX or GL_PROJECTION_MATRIX in ES 2.0, which is most likely your problem: those glGetFloatv calls return nothing useful, so you have to track the model-view and projection matrices yourself.
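For illustration, here is a rough sketch of the math an unproject helper has to do once you track those matrices yourself. It assumes the GLM library (glm::inverse and friends); if you can use GLM, glm::unProject already implements exactly this:

#include <glm/glm.hpp>

// win = (screen x, flipped screen y, depth in [0, 1]),
// viewport = (x, y, width, height)
glm::vec3 unproject(const glm::vec3 &win, const glm::mat4 &modelView,
                    const glm::mat4 &projection, const glm::ivec4 &viewport)
{
    // window coordinates -> normalized device coordinates in [-1, 1]
    glm::vec4 ndc((win.x - viewport[0]) / viewport[2] * 2.0f - 1.0f,
                  (win.y - viewport[1]) / viewport[3] * 2.0f - 1.0f,
                  win.z * 2.0f - 1.0f,
                  1.0f);
    // invert the combined transform and de-homogenize
    glm::vec4 obj = glm::inverse(projection * modelView) * ndc;
    return glm::vec3(obj) / obj.w;
}

Calling this with win.z = 0 and win.z = 1 gives the near- and far-plane points you were trying to get from gluUnProject.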
I'm using the Microsoft Kinect SDK to get the depth and color information from a Kinect and then convert that information into a point cloud. I need the depth information to be in real world coordinates with the centre of the camera as the origin.
I've seen a number of conversion functions, but these are apparently for OpenNI and non-Microsoft drivers. I've read that the depth information coming from the Kinect is already in millimetres and is contained in 11 bits... or something.
How do I convert this bit information into real world coordinates that I can use?
Thanks in advance!
This is catered for within the Kinect for Windows library by the Microsoft.Research.Kinect.Nui.SkeletonEngine class and the following method:
public Vector DepthImageToSkeleton (
float depthX,
float depthY,
short depthValue
)
This method maps a point in the depth image produced by the Kinect into skeleton space, a vector based on real-world measurements.
From there (when I've created a mesh in the past), after enumerating the byte array in the bitmap created by the Kinect depth image, you create a new list of Vector points similar to the following:
// nui is the initialized Kinect Runtime and maximumDepth is a
// caller-chosen cutoff; both come from the surrounding code.
var width = image.Image.Width;
var height = image.Image.Height;
var greyIndex = 0;
var points = new List<Vector>();

for (var y = 0; y < height; y++)
{
    for (var x = 0; x < width; x++)
    {
        short depth;
        switch (image.Type)
        {
            case ImageType.DepthAndPlayerIndex:
                // the low 3 bits hold the player index, so shift them out
                depth = (short)((image.Image.Bits[greyIndex] >> 3) | (image.Image.Bits[greyIndex + 1] << 5));
                if (depth <= maximumDepth)
                {
                    points.Add(nui.SkeletonEngine.DepthImageToSkeleton((float)x / image.Image.Width, (float)y / image.Image.Height, (short)(depth << 3)));
                }
                break;

            case ImageType.Depth: // depth comes back mirrored
                depth = (short)(image.Image.Bits[greyIndex] | (image.Image.Bits[greyIndex + 1] << 8));
                if (depth <= maximumDepth)
                {
                    points.Add(nui.SkeletonEngine.DepthImageToSkeleton((float)(width - x - 1) / image.Image.Width, (float)y / image.Image.Height, (short)(depth << 3)));
                }
                break;
        }
        greyIndex += 2;
    }
}
By doing so, the end result is a list of vectors in skeleton space, which is measured in meters; if you want centimeters, multiply by 100 (etc.).
By default, wxGrid shows a small (10 pixels?) blank border on the right-hand side, after the last column. Calling SetMargins() has no effect on it.
It is irritating, but I can live with it.
However, if I set the row label width to zero, the blank border grows much larger. If I have just one column, the effect is horrible: it looks like wxGrid is leaving room for the non-existent label.
myPatGrid = new wxGrid(panel, IDC_PatGrid, wxPoint(10,10), wxSize(150,300));
myPatGrid->SetRowLabelSize(0);
myPatGrid->CreateGrid(200, 1);
myPatGrid->SetColLabelValue(0, L"Patient IDs");
Is there a way to remove this border?
Note that if I make the wxGrid window narrower in the constructor, hoping to hide the border, I get a horizontal scroll bar instead, which is horrible too. For example:
myPatGrid = new wxGrid(panel, IDC_PatGrid, wxPoint(10,10), wxSize(100,300));
myPatGrid->SetRowLabelSize(0);
myPatGrid->CreateGrid(200, 1);
myPatGrid->SetColLabelValue(0, L"Patient IDs");
I just upgraded to wxWidgets v2.8.12; the problem still exists.
I didn't find an "autosize" function that fits the columns to the grid width.
As a workaround, if you have only one column, set its width to
myPatGrid->SetColMinimalWidth(0, grid_width - wxSystemSettings::GetMetric(wxSYS_VSCROLL_X) - 10);
otherwise, sum the other columns' widths and size the last column to fill the remaining space (minus the scrollbar width, minus 10).
EDIT: I have a working example, which produces this:
int gridSize = 150;
int minSize = gridSize - wxSystemSettings::GetMetric(wxSYS_VSCROLL_X) - 2; // a scrollbar appears if wider

grid->SetRowLabelSize(0);
grid->SetColMinimalWidth(0, minSize);
grid->SetColSize(0, minSize); // needed, otherwise the column will not resize
grid->ForceRefresh();
grid->SetColLabelValue(0, "COORD");
EDIT 2: I succeeded in removing the remaining margin with this:
int gridSize = 150;
int minSize = gridSize - 16; // trial & error
grid->SetMargins(0 - wxSystemSettings::GetMetric(wxSYS_VSCROLL_X), 0);
I solved something similar yesterday, so I'd like to contribute the following, which does the job for me. Perhaps it will help someone else:
void RecalculateGridSize(wxGrid *grid, int cols)
{
    if (grid == NULL)
        return;

    grid->AutoSizeColumns();

    float cumulative = 0, param = 0;
    for (int i = 0; i < cols; ++i)
        cumulative += grid->GetColSize(i);

    // don't stretch when the client size is lower than the calculated width
    if (grid->GetClientSize().x < cumulative)
        return;

    param = (float) grid->GetClientSize().x / cumulative;
    for (int i = 0; i < cols; ++i) {
        if (i != cols - 1)
            grid->SetColSize(i, int(grid->GetColSize(i) * param) - 2); // -2 for each line per column
        else
            grid->SetColSize(i, int(grid->GetColSize(i) * param)); // leave the last column full so it fills properly
    }
}
Note: this works particularly well when called from an OnSize() event handler.
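For example, a minimal hookup might look like this (assuming wxWidgets 2.9+ for Bind(); with 2.8 you would use an event table entry instead):

// re-fit the columns whenever the grid is resized
grid->Bind(wxEVT_SIZE, [grid](wxSizeEvent &event) {
    RecalculateGridSize(grid, grid->GetNumberCols());
    event.Skip(); // let wxGrid run its own size handling too
});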
I'm new to OpenNI, and I'm trying to create a simple ImageGenerator that just displays a pure color image, say white. I modified the "SampleModule", and in the UpdateData() method I assign 255 to *pPixel for every pixel. The UpdateData() method is as follows:
XnStatus SampleImage::UpdateData()
{
    XnStatus nRetVal = XN_STATUS_OK;
    XnUInt8* pPixel = m_pImageMap;

    for (XnUInt y = 0; y < 300; ++y)
    {
        for (XnUInt x = 0; x < 400; ++x, ++pPixel)
        {
            *pPixel = (XnUInt8)255;
        }
    }

    m_nFrameID++;
    m_nTimestamp += 1000000 / SUPPORTED_FPS;

    // mark that data is old
    m_bDataAvailable = FALSE;

    return (XN_STATUS_OK);
}
The code compiles fine, and I can register it with niReg, but when I try to read the image pixel values from the data generated by the module, I get some strange values (not 255 as I expected). I use the following code to read the pixel values:
const XnUInt8* pImageMap = mImageGenerator.GetImageMap();
for (XnUInt y = 0; y < 300; ++y)
{
    for (XnUInt x = 0; x < 400; ++x, ++pImageMap)
    {
        cout << (int)*pImageMap << endl;
    }
}
Also, when I run "NiViewer", the program still says it can't find an image node, although the "SampleModule" can be found as a depth node.
Any advice would be appreciated.
Thanks a million,
Haolin Wei
Check whether you did the following things:
1. Set the color format, i.e. RGB (or YUV).
2. Set the correct value for each pixel in UpdateData(), i.e. r=255, g=255, b=255.
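For instance, if the node advertises RGB24, each pixel is three bytes, so the inner loop has to write all three channels. A minimal sketch of what UpdateData() might write, assuming m_pImageMap points at a 400x300 XN_PIXEL_FORMAT_RGB24 buffer:

XnUInt8* p = m_pImageMap;
for (XnUInt y = 0; y < 300; ++y)
{
    for (XnUInt x = 0; x < 400; ++x)
    {
        *p++ = 255; // R
        *p++ = 255; // G
        *p++ = 255; // B
    }
}

If that is your format, your current loop writes only one byte per pixel, i.e. only a third of the buffer, which would explain the strange values you read back.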