How do I see the actual color of a single RGB value in Google Colab? - google-colaboratory

Very basic question. I have a single vector (e.g., [53, 21, 110]) and I want to print the RGB color it represents in a colab notebook. Like a color swatch. What's the simplest way to do this?

The simplest way would be using the Image module from PIL. According to the documentation, you can construct an image with:
PIL.Image.new(mode, size, color=0)
mode [required]: determines the mode used for the image; it can be 'RGB', 'RGBA', 'HSV', etc. You can find more modes in the docs.
size [required]: a tuple (width, height) giving the dimensions of your image in pixels.
color [optional]: the color of the image; it can take a tuple representing the RGB color in your case. The default color is black.
Then, to show the image within Colab, you would use
display(img)
Given your question, the mode would need to be 'RGB', and if your vector is a list, you need to convert it into a tuple to use it.
To show a 300px by 300px image, the code would look like:
from PIL import Image
img = Image.new('RGB', (300,300), color = (53, 21, 110))
display(img)

Related

Apply 3D mask to 3D mask, background is white

I am trying to apply a binary mask to a 3D image by multiplying them. It, however, returns an image with a white background rather than black (which is what I would expect, since the binary mask is 0 and 1 and all of the background pixels equal 0).
First I load the 3D scan/image (.nii.gz) and mask (.nii.gz) with nibabel, using:
import nibabel as nib
scan = nib.load(path_to_scan).get_fdata()
mask = nib.load(path_to_mask).get_fdata()
Then I use:
masked_scan = scan*mask
Below is a visualization showing that, when applying another mask, the background is darker.
Below is what they look like in 3D Slicer as volumes.
What am I missing? The aim is to have a black background...
I also tried np.where(mask==1, scan, mask*scan)
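For reference, here is a minimal NumPy sketch (using synthetic arrays, since the .nii.gz files aren't available) showing that multiplying by a truly binary 0/1 mask does produce a zero, i.e. black, background. If the background still renders white, the mask may contain values other than 0 and 1 (e.g. 0/255, or NaN in the background), or the viewer may be windowing/inverting the display:

```python
import numpy as np

# Synthetic stand-ins for the loaded volumes (assumption: real data comes from nibabel)
scan = np.random.rand(4, 4, 4) * 1000   # fake 3D scan
mask = np.zeros((4, 4, 4))              # fake binary mask
mask[1:3, 1:3, 1:3] = 1                 # foreground region

masked_scan = scan * mask

# Background voxels (mask == 0) are exactly 0, i.e. black
print(masked_scan[mask == 0].max())

# A 0/255 mask would blow up the foreground values instead -- binarize it first:
mask255 = mask * 255
masked_fixed = scan * (mask255 > 0)     # same result as the 0/1 mask
```

If this sketch behaves as expected but your real data doesn't, printing np.unique(mask) is a quick way to check whether the loaded mask is really 0/1.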

python numpy/scipy zoom changing center

I have a 2D numpy array, say something like:
import numpy as np
x = np.random.rand(100, 100)
Now, I want to zoom into this image (keeping the output size the same, i.e. (100, 100)), and I want to change the centre of the zoom.
So, say I want to zoom while keeping the point (70, 70) at the centre; normally one would "translate" the image to that point and then zoom.
I wonder how I can achieve this with scipy. Is there a way to specify, say, 4 coordinates from this numpy array and basically fill the canvas with the interpolated image from this region of interest?
You could use ndimage.zoom to do the zooming part. I use ndimage a lot; it works well and is fast. https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.zoom.html
The 4-coordinates part you mention is, I presume, the two corners of the region you want to zoom into. That's easy using numpy slicing of your image (presuming your image is an np array):
your_image[r1:r2, c1:c2]
Assuming you want your output image at 100x100, the r2 - r1 and c2 - c1 differences must be equal, so your region is square.
ndimage.zoom takes a zoom factor (float). You would need to compute what that zoom factor is in order to take your sliced image and turn it into a 100x100 array:
ndimage.zoom(your_image[r1:r2, c1:c2], zoom=your_zoom_factor)
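Putting it together, a minimal sketch (the coordinates are chosen for illustration): to keep (70, 70) at the centre of the 100x100 output with a 2.5x zoom, crop a 40x40 window centred on (70, 70) and scale it back up:

```python
import numpy as np
from scipy import ndimage

x = np.random.rand(100, 100)

# Region of interest: a 40x40 window centred on (70, 70)
r1, r2 = 50, 90
c1, c2 = 50, 90
region = x[r1:r2, c1:c2]

# Zoom factor that maps the 40x40 crop back to 100x100
zoom_factor = 100 / (r2 - r1)  # 2.5
zoomed = ndimage.zoom(region, zoom=zoom_factor)

print(zoomed.shape)  # (100, 100)
```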

How to Zero Pad RGB Image?

I want to pad an RGB image of size 500x500x3 to 512x512x3. I understand that I need to add 6 pixels on each border, but I cannot figure out how. I have read the numpy.pad function docs but couldn't understand how to use it. Code snippets would be appreciated.
If you need to pad with 0:
RGB = np.pad(RGB, pad_width=[(6, 6),(6, 6),(0, 0)], mode='constant')
Use the constant_values argument to pad with different values (the default is 0):
RGB = np.pad(RGB, pad_width=[(6, 6), (6, 6), (0, 0)], mode='constant', constant_values=[(3, 3), (5, 5), (0, 0)])
We can try to get a solution by adding border padding, but it would get a bit complex. I would like to suggest an alternate approach. First we create a canvas of size 512x512, then we place your original image inside it. You can get help from the following code:
import numpy as np
# Create a larger black canvas
canvas = np.zeros((512, 512, 3), dtype=your_500_500_img.dtype)
canvas[6:506, 6:506] = your_500_500_img
Obviously you can replace 6 and 506 with a more generalized padding variable (padding, 512 - padding, etc.), but this code illustrates the concept.
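For what it's worth, here is a self-contained check (with a random stand-in for the 500x500x3 image) that both the np.pad call and the canvas approach produce the same 512x512x3 result when padding with zeros:

```python
import numpy as np

img = np.random.randint(0, 256, size=(500, 500, 3), dtype=np.uint8)

# Approach 1: np.pad with 6 zero pixels on each border of the two spatial axes
padded = np.pad(img, pad_width=[(6, 6), (6, 6), (0, 0)], mode='constant')

# Approach 2: paste the image into a black 512x512 canvas
canvas = np.zeros((512, 512, 3), dtype=img.dtype)
canvas[6:506, 6:506] = img

print(padded.shape)                    # (512, 512, 3)
print(np.array_equal(padded, canvas)) # True
```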

Objective C - Detect a "path" drawing, inside a map image

I have a physical map (real world), for example, a little town map.
A "path" line is painted over the map, think about it like "you are here. here's how to reach the train station" :)
Now, let's suppose I can get an image of that scenario (e.g., coming from a photo).
An image that looks like:
My goal is not an easy one!
I want to GET the path OUT of the image, i.e., separate the two layers.
Is there a way to extract those red marks from the image?
Maybe using CoreGraphics? Maybe an external library?
It's not an Objective-C-specific question, but I am working on Apple iOS.
I have already worked with something similar: face recognition.
Now the answer I expect is: "What do you mean by PATH?"
Well, I really don't know, maybe a line (see above image) of a completely different color from the 'major' colors in the background.
Let's talk about it.
If you can use OpenCV then it becomes simpler. Here's a general method:
Separate the image into Hue, Saturation and Value (HSV colorspace)
Here's the OpenCV code:
// Compute HSV image and separate into colors
IplImage* hsv = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 3 );
cvCvtColor( img, hsv, CV_BGR2HSV );
IplImage* h_plane = cvCreateImage( cvGetSize( img ), 8, 1 );
IplImage* s_plane = cvCreateImage( cvGetSize( img ), 8, 1 );
IplImage* v_plane = cvCreateImage( cvGetSize( img ), 8, 1 );
cvCvtPixToPlane( hsv, h_plane, s_plane, v_plane, 0 );
Deal with the Hue (h_plane) image only as it gives just the hue without any change in value for a lighter or darker shade of the same color
Check which pixels have a red hue (I think red is at 0 degrees in HSV, but please check the OpenCV values)
Copy these pixels into a separate image
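The same steps in a NumPy-only sketch (an assumption for illustration: the image is a float RGB array rather than the IplImage used above). The hue formula below is the standard RGB-to-HSV conversion; red sits near 0/360 degrees, so the threshold wraps around, and a saturation check filters out gray background pixels:

```python
import numpy as np

def rgb_to_hue(img):
    """Per-pixel hue in degrees [0, 360) for a float RGB image in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx = img.max(axis=-1)
    mn = img.min(axis=-1)
    delta = mx - mn
    hue = np.zeros_like(mx)
    nz = delta > 0
    # Piecewise hue formula depending on which channel is the maximum
    rmax = nz & (mx == r)
    gmax = nz & (mx == g) & ~rmax
    bmax = nz & ~rmax & ~gmax
    hue[rmax] = (60 * (g[rmax] - b[rmax]) / delta[rmax]) % 360
    hue[gmax] = 60 * (b[gmax] - r[gmax]) / delta[gmax] + 120
    hue[bmax] = 60 * (r[bmax] - g[bmax]) / delta[bmax] + 240
    return hue

# Tiny synthetic "map": mostly gray, with two red "path" pixels
img = np.full((4, 4, 3), 0.5)
img[1, 1] = [0.9, 0.1, 0.1]
img[2, 3] = [0.8, 0.0, 0.1]

hue = rgb_to_hue(img)
saturated = img.max(axis=-1) - img.min(axis=-1) > 0.2  # ignore gray pixels
red_mask = saturated & ((hue < 20) | (hue > 340))

print(np.argwhere(red_mask))  # coordinates of the red path pixels
```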
I'd strongly suggest using the OpenCV library if possible; it is basically made for such tasks.
You could filter by color: define a threshold for what counts as red, set everything else to transparent (alpha), and what is left over is your "path".

How to replace all pixels of some color in a bitmap in Rebol?

Let's say I have a picture, I want to create some variations by changing a color. How to do this ?
I don't want to apply a color filter to the picture; I want to change colors pixel by pixel, testing each pixel's color: if it is, let's say, red, I want to turn it blue.
In Rebol, images are also series, so you can use most of the series functions to change/find RGB colors, etc.
i: load %test.png
type? i
image!
first i
255.255.255.0 (the last value is alpha)
change i 255.0.0.0 ;change the first rgba value to red
view layout [image i] ;you can see the upper-left pixel is now red
you can dump all rgba values in an image:
forall i [print first i]
you can also change a continuous part:
change/dup head i blue 100 ;change first 100 pixels to blue
you can also work on i/rgb and i/alpha, these are binary values (bytes)
and you can use copy to get a part of an image:
j: copy/part at i 100x100 50x50 ;copy from 100x100 to 150x150 to a new image.
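The question's actual goal, turning every red pixel blue, amounts to a masked per-pixel replacement over the image's pixel values. As a language-neutral illustration of that operation (in NumPy, on a tiny hypothetical bitmap, rather than in Rebol):

```python
import numpy as np

# Tiny stand-in bitmap: 3x3 RGB, with two red pixels
img = np.zeros((3, 3, 3), dtype=np.uint8)
img[0, 0] = [255, 0, 0]
img[2, 1] = [255, 0, 0]

red = np.array([255, 0, 0], dtype=np.uint8)
blue = np.array([0, 0, 255], dtype=np.uint8)

# Boolean mask of pixels exactly equal to red, then masked assignment
mask = (img == red).all(axis=-1)
img[mask] = blue

print(mask.sum())  # 2
```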
Use some of the image processing capabilities as documented here:
http://www.rebol.com/docs/view-guide.html
Demo program showing some of them in action here:
http://www.rebol.com/view/demos/gel.r