Google Meet background blur - TensorFlow

I was curious about the new "turn on/off" background blur feature of Google Meet (currently in testing). I have investigated a bit, and it seems it is using TensorFlow Lite models:
segm_heavy.tflite
segm_lite.tflite
via WASM
mediapipe_wasm_simd.wasm
while the model graph should be
background_blur_graph.binarypb
The model seems to work at the level of the HTMLCanvasElement, as far as I can see. Is anyone aware of a similar model?
[UPDATE]
Thanks to Jason Mayes and Physical Ed, I was able to reproduce a very similar background blur effect in Google's BodyPix demo.
The application settings are shown in the Controls box. There is also a backgroundBlurAmount option that lets you customize the blur percentage to apply.
The result is very close to the official Google Meet application.

The majority of segmentation models give an alpha channel as a result (some give more, but alpha is the most useful): what is masked and what is not.
So if you want to blur the background, it's a multi-step process:

1. Resize the input to the size the model expects.
2. Run the model to get the alpha channel.
3. Resize the output back to the original size.
4. Draw the original image on a canvas.
5. Draw the alpha channel over it so only the foreground stays visible, for example using ctx.globalCompositeOperation = 'darken'.
6. Optionally blur it a bit, since model output is never perfect, for example using ctx.filter = 'blur(8px)'.

To blur the background itself, copy the canvas from step 4, apply blur to it, and draw it back before going to step 5 (see the sketch below).
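A minimal sketch of steps 4-6 plus the background-blur copy, assuming a hypothetical segmentPerson() that already returns the mask as an ImageData at canvas size (in practice you would get it from BodyPix, Selfie Segmentation, or similar and handle the resizing yourself). It composites with 'source-in' rather than 'darken', but the structure is the same:

```typescript
// Sketch only: `segmentPerson` stands in for whatever segmentation model
// you use; it is assumed to return an ImageData whose alpha channel marks
// the foreground, already resized to the canvas size (steps 1-3).
async function blurBackground(
  video: HTMLVideoElement,
  canvas: HTMLCanvasElement,
  segmentPerson: (src: HTMLVideoElement) => Promise<ImageData>
): Promise<void> {
  const ctx = canvas.getContext('2d')!;
  const { width, height } = canvas;

  // Step 4, blurred variant: draw the frame blurred as the backdrop.
  ctx.filter = 'blur(8px)';
  ctx.drawImage(video, 0, 0, width, height);
  ctx.filter = 'none';

  // Step 2: run the model to get the alpha mask.
  const mask = await segmentPerson(video);

  // Steps 5-6: rebuild the sharp foreground on an offscreen canvas by
  // keeping frame pixels only where the mask is opaque, then draw it
  // over the blurred backdrop.
  const fg = document.createElement('canvas');
  fg.width = width;
  fg.height = height;
  const fgCtx = fg.getContext('2d')!;
  fgCtx.putImageData(mask, 0, 0);               // mask alpha into the layer
  fgCtx.globalCompositeOperation = 'source-in'; // keep video only under mask
  fgCtx.drawImage(video, 0, 0, width, height);
  ctx.drawImage(fg, 0, 0);                      // sharp person over blurred bg
}
```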
Regarding models, Google Meet's is not bad, but I had better results with the Google Selfie model. BodyPix is an older model: great configurability, but not that great results.
Example code: https://github.com/vladmandic/human/blob/main/src/segmentation/segmentation.ts

Related

Capture a live video of handwriting using pen and paper and replace the hand in the video with some object or cursor

I want to process the captured video. I will capture video of handwriting/drawing on paper, but I do not want to show the hand or pen on the paper while live streaming via p5.js.
Can this be done using machine learning?
Any idea how to implement this?
If I understand you right, you want to detect where the hand is in the image and draw an overlay at that position, right?
If so, you can use YOLO to detect where the hand is.
There are some pretrained networks you can download; maybe they are good enough, or maybe you have to train your own just for hands.
There is also a library for YOLO in JS: https://github.com/ModelDepot/tfjs-yolo-tiny (a detect-and-overlay sketch follows below).
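As a rough sketch of the detect-and-overlay loop, here it is with the TF.js coco-ssd model standing in for a hand detector (purely because its API is simple; coco-ssd only knows classes like 'person', so for hands you would swap in tfjs-yolo-tiny or your own model):

```typescript
import '@tensorflow/tfjs';
import * as cocoSsd from '@tensorflow-models/coco-ssd';

// coco-ssd is only a stand-in here; load a hand-trained model for real use.
async function hideHand(video: HTMLVideoElement, overlay: HTMLCanvasElement) {
  const model = await cocoSsd.load();
  const ctx = overlay.getContext('2d')!;

  async function frame() {
    const detections = await model.detect(video);
    ctx.clearRect(0, 0, overlay.width, overlay.height);
    for (const { bbox: [x, y, w, h] } of detections) {
      // Cover the detected region (or draw a cursor sprite instead).
      ctx.fillStyle = '#fff';
      ctx.fillRect(x, y, w, h);
    }
    requestAnimationFrame(frame);
  }
  frame();
}
```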
You may not need to go the full ML object segmentation route.
If the paper's position and illumination are constant (or at least knowable), you could try a simple heuristic: compare the pixels in the current frame with a short history and keep the most constant pixel values. There might be some lag as new parts of your drawing 'become constant', so you could try modifying the accumulation, for example committing a pixel faster when it was white and is turning black (see the sketch below).
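A minimal sketch of that heuristic, assuming a fixed camera and steady lighting (all names here are illustrative): a pixel is copied to the output only after it has held roughly the same value for several consecutive frames, so the moving hand and pen never get committed.

```typescript
const HOLD = 10;       // frames a pixel must stay constant before committing
const TOLERANCE = 12;  // per-channel difference still counted as "same"

function updateStable(
  frame: ImageData,         // current camera frame
  last: Uint8ClampedArray,  // previous frame's pixels (same layout as frame.data)
  count: Uint16Array,       // per-pixel "constant for n frames" counter
  stable: ImageData         // the hand-free output being accumulated
): void {
  const src = frame.data;
  const out = stable.data;
  for (let p = 0, i = 0; i < src.length; p++, i += 4) {
    const same =
      Math.abs(src[i] - last[i]) < TOLERANCE &&
      Math.abs(src[i + 1] - last[i + 1]) < TOLERANCE &&
      Math.abs(src[i + 2] - last[i + 2]) < TOLERANCE;
    count[p] = same ? count[p] + 1 : 0;
    if (count[p] >= HOLD) {
      // The pixel has been stable long enough: commit it to the output.
      out[i] = src[i];
      out[i + 1] = src[i + 1];
      out[i + 2] = src[i + 2];
      out[i + 3] = 255;
    }
    last[i] = src[i];
    last[i + 1] = src[i + 1];
    last[i + 2] = src[i + 2];
  }
}
```

To bias toward ink, you could lower HOLD (or skip the counter entirely) when the new value is darker than the stored one, which implements the 'was white and is going black' shortcut.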

Train model with the same image in different orientations

Is it a good idea to train the model with the same images, but in different orientations? I have a small set of images for training; that's why I'm trying to cover all the mobile camera/gallery user scenarios.
For example, the image example.png with 3 copies (example90.png, example180.png, and example270.png) at different rotations, and also with different background colors, shadows, etc.
By the way, my task is to identify the type of animal.
Is that a good idea?
If you use Core ML with the Vision framework (and you probably should), Vision will automatically rotate the image so that "up" is really up. In that case it doesn't matter how the user held their camera when they took the picture (assuming the picture still has the EXIF data that describes its orientation).

How do e-commerce websites edit their product pics for the front page?

I am wondering how these e-commerce websites edit their product pics for the front page.
For example:
This is a product image from flipkart.com, an online store.
This is the photo taken by the camera.
In picture 1 there is some blur effect. How did they do it?
There's no effect used. It is just a simple process.
The object is photographed against a white background.
The image is then 'masked' to isolate the object on its own layer, either by hand or by a Photoshop plugin. Manually, you can draw around the edge of the object using the lasso tool, then add a layer mask to remove the background.
Noise reduction, colour correction and other retouches are applied to this object layer. This is the blur effect that I think you are referring to. This may or may not be required, depending on image quality.
Shadows tend to be redrawn by hand using simple shapes with various blurs applied on a layer below the object. The simplest shadow is just a black ellipse with Gaussian blur applied and layer opacity set to 20%.
A background colour is then applied, depending on how you will place the image. With images for e-commerce this tends to be a white background.
The process for masking is varied and depends on your preferred tools, the complexity of the image and the shadow realism you want to achieve.
I recommend further research into 'Image Masking' to find the technique that suits you.
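The retailers do all of this in Photoshop, but if you wanted to reproduce the same layer stack programmatically, a small canvas sketch makes the order concrete (the ellipse geometry and blur radius here are just illustrative, and `object` is assumed to be an already-masked image with a transparent background):

```typescript
function composeProductShot(
  object: HTMLImageElement,    // masked product on a transparent background
  canvas: HTMLCanvasElement
): void {
  const ctx = canvas.getContext('2d')!;
  const { width: w, height: h } = canvas;

  // 1. Background colour: white for most e-commerce sites.
  ctx.fillStyle = '#fff';
  ctx.fillRect(0, 0, w, h);

  // 2. Shadow layer below the object: a black ellipse, blurred, at 20% opacity.
  ctx.save();
  ctx.filter = 'blur(12px)';
  ctx.globalAlpha = 0.2;
  ctx.fillStyle = '#000';
  ctx.beginPath();
  ctx.ellipse(w / 2, h * 0.85, w * 0.3, h * 0.04, 0, 0, Math.PI * 2);
  ctx.fill();
  ctx.restore();

  // 3. The retouched object layer on top.
  ctx.drawImage(object, w * 0.2, h * 0.1, w * 0.6, h * 0.7);
}
```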

What effect or similar to it is shown in this video?

In https://www.youtube.com/watch?v=CmftPUqQ0nE at 3:42, the photo suddenly becomes grainy or something similar. I can't replicate it, since the uploader disabled the comments section and the description is not helpful either.
An effect like this can be achieved with a high pass filter. To do this, duplicate the layer (Ctrl+J, Cmd+J on a Mac). Put a high pass filter on the top layer (fairly small radius, i.e. less than around 2 pixels) and then set the blend mode of the layer to overlay.
EDIT: If you convert to a smart object before applying the high pass filter, you can play with the radius afterwards.
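If you would rather script the effect than do it in Photoshop, here is a rough sketch on a canvas: the high-pass layer is the image minus a blurred copy, re-centred on mid-grey, blended back with the standard overlay formula (the radius and function names are illustrative):

```typescript
function highPassSharpen(src: HTMLCanvasElement, radius = 2): void {
  const ctx = src.getContext('2d')!;
  const { width: w, height: h } = src;
  const base = ctx.getImageData(0, 0, w, h);

  // Blurred copy via the built-in canvas blur filter.
  const tmp = document.createElement('canvas');
  tmp.width = w;
  tmp.height = h;
  const tctx = tmp.getContext('2d')!;
  tctx.filter = `blur(${radius}px)`;
  tctx.drawImage(src, 0, 0);
  const blurred = tctx.getImageData(0, 0, w, h);

  const out = ctx.createImageData(w, h);
  for (let i = 0; i < base.data.length; i += 4) {
    for (let c = 0; c < 3; c++) {
      const b = base.data[i + c];
      // High pass: the detail the blur removed, centred on mid-grey (128).
      const hp = Math.min(255, Math.max(0, b - blurred.data[i + c] + 128));
      // Overlay blend of the high-pass layer onto the original.
      out.data[i + c] = b < 128
        ? (2 * b * hp) / 255
        : 255 - (2 * (255 - b) * (255 - hp)) / 255;
    }
    out.data[i + 3] = base.data[i + 3];
  }
  ctx.putImageData(out, 0, 0);
}
```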

SpriteKit Layer Scratch Effect

I am creating a game in SpriteKit in which I have some SKShapeNodes. I want the SKShapeNodes to be hidden by a color. For example:
Initially, as the scene loads, it should show a plain color.
The user scratches (touches and moves the touch) to scrape away the color and reveal the shapes behind it. I will perform other actions on these shapes later.
If any one suggests this link:
https://github.com/moqod/iOS-Scratch-n-See
then I have already gone through it, but the problem is that it uses two images, one as the front and the other as the background, and it is done in Core Graphics. I am a complete beginner in Core Graphics and have difficulty understanding how to show a color instead of an image to scratch.
I understand that this question has already been asked, but that version uses images. I want only one color as the layer to be scratched.
Will anyone please help me achieve a scratch effect that scratches away a color and reveals the other nodes behind it?