Minimum frame size in CSMA/CD

May I ask, why is the minimum frame size = bandwidth * round-trip time?
Why could it not be smaller than bandwidth * RTT?
Thank you very much

If the frame is any smaller than bandwidth * RTT, the sender may have already finished transmitting by the time the collision signal reaches it, so it never learns that the frame collided and cannot retransmit it. Hope that makes sense! :)
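To make the arithmetic concrete, here is a quick back-of-the-envelope check in Python using the classic 10 Mbps Ethernet figures (the 51.2 µs worst-case RTT is the textbook value, assumed here):

```python
# Classic 10 Mbps Ethernet numbers (assumed): 51.2 us worst-case RTT.
bandwidth_bps = 10_000_000
rtt_seconds = 51.2e-6

# The sender must still be transmitting when a collision report returns,
# so the frame must occupy the wire for at least one RTT:
min_frame_bits = bandwidth_bps * rtt_seconds   # about 512 bits
min_frame_bytes = min_frame_bits / 8           # about 64 bytes
```

That 64-byte result matches Ethernet's actual minimum frame size, which is exactly where that standard's number comes from.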

Simulate Camera in Numpy

I have the task of simulating a camera with a full well capacity of 10,000 photons per sensor element
in numpy. My first idea was to do it like this:
camera = np.random.normal(0.0, 1/10000, np.shape(img))
img_with_noise = img + camera
but it hardly shows any effect.
Does anyone have an idea how to do it?
From what I interpret from your question: if each physical pixel of the sensor has a 10,000-photon limit, that limit corresponds to the brightest a digital pixel can be in your image. Similarly, 0 incident photons make the darkest pixels of the image.
You have to create a map from the physical sensor to the digital image. For the sake of simplicity, let's say we work with a grayscale image.
Your first task is to fix the colour bit depth of the image. That is to say, is your image an 8-bit colour image (which is usually the case)? If so, the brightest pixel has a brightness value of 255 (= 2^8 - 1, for 8 bits). The darkest pixel is always chosen to have the value 0.
So you'd have to map from the range 0 --> 10,000 (sensor) to 0 --> 255 (image). The most natural idea would be to do a linear map (i.e. every pixel of the image is obtained by the same multiplicative factor from every pixel of the sensor), but to correctly interpret (according to the human eye) the brightness produced by n incident photons, often different transfer functions are used.
A transfer function in a simplified version is just a mathematical function doing this map - logarithmic TFs are quite common.
Also, since it seems like you're generating noise, it is conceptually wrong to add camera itself to the image img. What you should do is fix a noise threshold first; this corresponds to the maximum number of photons that noise can add to a pixel reading. Then generate random numbers (according to some distribution, if so required) in the range 0 --> noise_threshold. Finally, use the map created earlier to add this noise to the image array.
Hope this helps and is in tune with what you wish to do. Cheers!
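A minimal numpy sketch of the approach described above, assuming a linear transfer function, an 8-bit grayscale image, and an arbitrary noise threshold of 200 photons (all of these are assumptions you would tune):

```python
import numpy as np

full_well = 10_000        # photons per sensor element (from the question)
max_digital = 255         # 8-bit grayscale image (assumed)
noise_threshold = 200     # assumed maximum noise, in photons

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4)).astype(float)  # stand-in 8-bit image

# Map digital values back to photon counts, add photon-domain noise,
# then map back to digital values (a linear transfer function is assumed;
# a logarithmic one would replace the division by full_well below).
photons = img / max_digital * full_well
noise = rng.uniform(0.0, noise_threshold, size=photons.shape)
img_with_noise = np.clip(photons + noise, 0, full_well) / full_well * max_digital
```

Because the noise is generated in photon units and then pushed through the same sensor-to-image map as the signal, its visible strength on the image scales sensibly with the full well capacity.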

How to display stimulus accurately using frames

I have a bit of code attached below that displays a stimulus for a certain number of frames.
from psychopy import visual, logging, event, core
#create a window to draw in
myWin = visual.Window((600,600), allowGUI=False, blendMode='add', useFBO=True)
logging.console.setLevel(logging.DEBUG)
#INITIALISE SOME STIMULI
grating1 = visual.GratingStim(myWin, mask="gauss",
                              color=[1.0, 1.0, 1.0], contrast=0.5,
                              size=(1.0, 1.0), sf=(4, 0), ori=45,
                              autoLog=False)  # this stim changes too much for autologging to be useful
grating2 = visual.GratingStim(myWin, mask="gauss",
                              color=[1.0, 1.0, 1.0], opacity=0.5,
                              size=(1.0, 1.0), sf=(4, 0), ori=-45,
                              autoLog=False)  # this stim changes too much for autologging to be useful
for frameN in range(300):
    grating1.draw()
    grating2.draw()
    myWin.flip()  # update the screen
At a 60 Hz frame refresh rate, 300 frames should be approximately 5 seconds. When I test it out, it is definitely longer than that.
In my experiment, I need the number of frames to be as few as 2 frames, and it seems that my code isn't going to display that accurately.
I was wondering if there is a better way to display the number of frames? Such as calling grating1.draw() and grating2.draw() before the for-loop, maybe?
I appreciate any help - thanks!
The timing discrepancy might be due to the frame rate not being exactly 60Hz. Try using myWin.getActualFrameRate() (PsychoPy documentation) to get the actual frame rate. Multiplying the real frame rate by 5.0 seconds should theoretically allow you to draw for exactly 5.0 seconds.
frame_rate = myWin.getActualFrameRate(nIdentical=60, nMaxFrames=100,
                                      nWarmUpFrames=10, threshold=1)
# Note: getActualFrameRate() returns None if no stable rate could be measured.
# The number of frames you want to display is the product of the
# frame rate and the number of seconds to display the stimulus:
# n_frames = frame_rate * n_seconds
for frameN in range(int(round(frame_rate * 5.0))):  # range() needs an int
    grating1.draw()
    grating2.draw()
    myWin.flip()

How to handle SpriteKit frame rate drop when multiplying by delta time

I use the common method of multiplying the player's velocity by the delta time in order to create a gravity effect, as follows:
CGPoint gravity = CGPointMake(0, kGravity);
CGPoint gravityStep = CGPointMultiplyScalar(gravity, _dt);
_playerVelocity = CGPointAdd(_playerVelocity, gravityStep);
CGPoint velocityStep = CGPointMultiplyScalar(_playerVelocity, _dt);
_player.position = CGPointAdd(_player.position, velocityStep);
The problem is that when the frame rate drops (for example, when a notification unrelated to the game arrives), the player misses the jump, I am guessing due to missed updates, and falls.
Is there a proper way to deal with this use case?
The integration itself is fine: _dt is applied once to turn acceleration into a velocity change and once to turn velocity into a position change, which is standard Euler integration. The real problem is that after a stall, _dt comes back very large, so a single update applies several frames' worth of gravity at once and the player falls through the jump. The usual fix is to clamp _dt to a sensible maximum (e.g. 1/30 s) before integrating, so one long frame cannot produce a huge step.
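One common way to handle a frame-rate hiccup is to clamp the delta time before integrating, so a single long frame cannot apply several frames' worth of gravity at once. A minimal sketch of the idea (written in Python for brevity; the 1/30 s cap is an assumption you would tune):

```python
MAX_DT = 1.0 / 30.0   # assumed cap: never integrate more than one 30 Hz frame

def step(position, velocity, gravity, dt):
    """One Euler integration step with the delta time clamped."""
    dt = min(dt, MAX_DT)              # a long stall no longer becomes one giant step
    velocity = velocity + gravity * dt
    position = position + velocity * dt
    return position, velocity
```

With, say, a 1.5 s stall, the step behaves as if only 1/30 s had passed, so the player keeps a plausible trajectory instead of teleporting downwards; the game slows momentarily rather than breaking.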

Algorithm or math to project a GIF file size?

I have a user's animated gif file that is about 10mb. I'd like to allow users to upload it and let me host it on my server, but I'd like to rescale it to fit a maximum file size of 5mb to conserve bandwidth from hotlinking.
I have a basic method right now that determines a targetWidth and targetHeight based on pixel surface area.
It works well enough:
CGFloat aspectRatio = originalHeight / originalWidth;
CGFloat reductionFactor = desiredFileSize / originalFileSize;
CGFloat targetSurfaceArea = originalSurfaceArea * reductionFactor;
int targetHeight = targetSurfaceArea / sqrt(targetSurfaceArea/aspectRatio);
int targetWidth = targetSurfaceArea / targetHeight;
It's fairly accurate; example results: a 27mb file will turn into 3.3mb, and a 13.9mb file into 5.5mb.
I would like to tune this accuracy to get much closer to 5mb, and I was hoping someone would know a bit more about how gif color / frame count could better be factored into this algorithm. Thanks
Not sure you're going to find an easy way to do this. There is no closed-form way to predict the compressed size of a file without actually running the compression algorithm.
However, if you have plenty of compute cycles you could use an approximation-based approach. Use the algorithm above to give you a first resize of the image. If the resulting file is larger than 5Mb, try a scale halfway between the current one and your lower bound; if smaller, try halfway between the current one and your upper bound. Repeat until you get sufficiently close to 5Mb.
So, for example
50% = 3.3Mb, so try halfway between 50 and 100
75% = 6.1Mb, so try halfway between 75 and 50
62.5% = 4.7Mb so try halfway between 62.5 and 75
etc
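That search is just a bisection on the resize factor. A sketch in Python, where encode_size_mb is a stand-in for actually re-encoding the GIF at a given scale (here modelled, purely for illustration, as size scaling with pixel area):

```python
def fit_under_limit(encode_size_mb, max_mb=5.0, tolerance_mb=0.1, max_iters=20):
    """Bisect on the resize factor until the encoded size lands near max_mb.

    encode_size_mb: callable mapping a scale factor in (0, 1] to the
    resulting file size in MB -- in practice this would re-encode the GIF.
    """
    lo, hi = 0.0, 1.0
    scale = hi
    for _ in range(max_iters):
        scale = (lo + hi) / 2.0
        size = encode_size_mb(scale)
        if abs(size - max_mb) <= tolerance_mb:
            break          # close enough to the target size
        if size > max_mb:
            hi = scale     # too big: shrink further
        else:
            lo = scale     # too small: allow a larger image
    return scale

# Toy stand-in for the encoder: a 10 MB GIF whose size scales with pixel area.
scale = fit_under_limit(lambda s: 10.0 * s * s)
```

Each iteration halves the search interval, so even with a slow encoder you converge in a handful of re-encodes.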

FFT Size in jTransforms

I need to calculate the FFT of audio data in an Android project, and I use jTransforms to achieve this.
The audio samples are a few seconds long and are recorded with a sample rate of 11025 Hz.
I am not sure how to set the length of the FFT in jTransforms.
I do not really need high frequency resolution, so a size of 1024 would be enough.
But from what I have understood learning about the FFT, if I decrease the FFT size F and use a sample with a length N > F, only the first F values of the original sample are transformed.
Is that true, or did I misunderstand something?
If it is true, is there an efficient way to transform the whole signal and reduce the FFT size afterwards?
I need this to classify different signals using support vector machines, and FFT sizes > 1024 would give me too many features as output, so I would have to reduce the result of the FFT to a more compact vector.
If you only want the FFT magnitude results, then apply the FFT repeatedly to successive 1024-sample chunks of the data, and sum (then average) the successive magnitude vectors to get an estimate for the entire, much longer signal.
See Welch's Method on estimating spectral density for an explanation of why this might be a useful technique.
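A minimal numpy sketch of that chunk-and-average idea, using random data as a stand-in for the recorded audio:

```python
import numpy as np

fs = 11025                      # sample rate from the question (Hz)
chunk = 1024                    # FFT size

rng = np.random.default_rng(0)
signal = rng.standard_normal(fs * 3)   # stand-in for ~3 s of recorded audio

n_chunks = len(signal) // chunk
mags = np.zeros(chunk // 2 + 1)        # rfft of a real signal gives N/2 + 1 bins
for i in range(n_chunks):
    segment = signal[i * chunk:(i + 1) * chunk]
    mags += np.abs(np.fft.rfft(segment))
mags /= n_chunks                       # averaged magnitude spectrum: 513 features
```

The result covers the whole recording but stays at a fixed 513-element size, which is exactly the compact feature vector an SVM needs. (Welch's method proper also windows each chunk and overlaps them; that refinement is omitted here.)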
I'm not familiar with the jTransforms library, but do you really set the size of the transform before calculating it? The amplitude values of the time-domain signal and the sampling frequency (11.025 kHz) are enough to calculate the FFT (note that the FFT assumes a constant sampling rate).
The maximum resolvable frequency in your signal is determined by the Nyquist theorem: it is half your sampling rate. In other words, sampling your signal at 11.025 kHz, you can expect your frequency graph to contain frequency values (and corresponding amplitudes) between 0 Hz and 5.5125 kHz. The width of the frequency bins, by contrast, is the sampling rate divided by the FFT length.
UPDATE:
The resolution of the FFT (the narrowness of the frequency bins) improves as the input signal gets longer, so 1024 samples might not be a long enough sequence if you need to distinguish very small changes in frequency. If that's not a problem for your application, the nature of your data is not varying quickly, and you have the processing time, then averaging 3-4 FFT estimates will greatly reduce noise and improve the estimates.
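For concreteness, the bin spacing comes from the sampling rate divided by the FFT length, while the Nyquist limit caps the highest resolvable frequency. A quick calculation with the numbers from the question:

```python
fs = 11025.0      # sample rate from the question (Hz)
N = 1024          # FFT length

bin_width_hz = fs / N      # frequency resolution: about 10.77 Hz per bin
max_freq_hz = fs / 2.0     # Nyquist limit: 5512.5 Hz
```

So with a 1024-point FFT, two tones closer together than roughly 11 Hz land in the same bin; only a longer FFT (more samples), not a higher sample rate, narrows that spacing.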