How to use SPI to communicate with an AD7705 on a Raspberry Pi Pico? - spi

I connected CS to GND, so there is no CS pin set in the code.
Code:
from machine import Pin, SoftSPI
import machine
import utime

spi = SoftSPI(baudrate=100_000, polarity=1, phase=1, bits=8, sck=Pin(18), mosi=Pin(16), miso=Pin(19))
dr = machine.Pin(21, machine.Pin.IN, Pin.PULL_UP)
# 0010 0000 select clock register
spi.write(b'0x20')
utime.sleep_ms(10)
# 0000 1100 for 4.9152MHz
spi.write(b'0x0c')
utime.sleep_ms(10)
# 0001 0000 select setup register
spi.write(b'0x10')
utime.sleep_ms(10)
# 0100 0010 self-calibration and gain=1
spi.write(b'0x42')
utime.sleep_ms(10)
# 0000 1000 request read
spi.write(b'0x08')
utime.sleep_ms(10)
for i in range(10):
    spi.write(b'0x38') # select data register
    # pass if not ready.
    while dr.value() != 0:
        pass
    data = spi.read(2)
    print(int(data.hex(),16))
    utime.sleep_ms(1)
DRDY stays at 1, and the code is unable to read the correct data.
Please let me know where the errors in the code are.
Thanks in advance!
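One likely culprit (an observation, not a verified driver): in MicroPython, b'0x20' is a four-byte ASCII literal containing the characters 0, x, 2, 0, not the single byte 0x20, so every register command above goes out as four wrong bytes. A minimal sketch of the same write sequence using raw byte literals, reusing the register values from the comments in the question:
spi.write(b'\x20')      # communications register: next write goes to the clock register
spi.write(b'\x0c')      # clock register value from the question (4.9152 MHz)
spi.write(b'\x10')      # communications register: next write goes to the setup register
spi.write(b'\x42')      # setup register: self-calibration, gain = 1
spi.write(b'\x38')      # communications register: next read is the 16-bit data register
while dr.value() != 0:  # wait for DRDY to go low before reading
    pass
data = spi.read(2)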

Related

Convert .bin to .mp4 and transfer it using UART, OpenMV H7, MicroPython

I am using an OpenMV H7 with MicroPython. I am currently trying to create a program that begins recording after confirmation via Bluetooth over UART. Then, after recording is done, it will save and transfer the video back to the other device.
I have successfully recorded a video; however, I do not know how to transcode the .bin file into an mp4 or any other video file through the program and transfer it back to the terminal of my computer.
My question is, is this possible? And if so, how can it be done?
Here is a copy of the current version of my code.
# Video Capture - By: isaias - Sat Mar 5 2022
# Using OpenMV IDE and MicroPython
# Processor: ARM STM32H743
import sensor, image, pyb, time
from pyb import UART

uart = UART(3, 115200, timeout_char=1000)
uart.write('Program Started.\n\r')

boolean = 0
record_time = 5000  # recording duration in milliseconds

while 1:
    boolean = uart.read()  # reads from terminal
    if boolean:  # when the user enters 1 in the terminal, begin recording
        sensor.reset()
        sensor.set_pixformat(sensor.RGB565)
        sensor.set_framesize(sensor.QVGA)
        sensor.skip_frames(time=2000)
        clock = time.clock()
        stream = image.ImageIO("/stream.bin", "w")
        # Red LED on means we are capturing frames.
        pyb.LED(1).on()
        start = pyb.millis()
        while pyb.elapsed_millis(start) < record_time:
            clock.tick()
            img = sensor.snapshot()
            # Modify the image if you feel like here...
            stream.write(img)
            print(clock.fps())
        stream.close()
        break  # once done recording, leave the while loop

# Blue LED on means we are done.
pyb.LED(1).off()
pyb.LED(3).on()

# Convert the file from .bin to a readable .mp4 file and transfer it back to the terminal to be downloaded.
# We are unsure if this can be done via UART. If not, what other ways make this possible?
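For the transfer step, a rough, untested sketch of sending the recorded file back in chunks over the same pyb.UART link is below; the receiving side would reassemble the bytes into stream.bin, and the ImageIO .bin stream would still have to be converted to a standard video format such as MP4 on the computer, which is outside this sketch.
def send_file_over_uart(uart, path, chunk_size=256):
    # Stream the file back in small chunks so the UART buffer never overflows.
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            uart.write(chunk)

send_file_over_uart(uart, "/stream.bin")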

Why do we use 1's complement instead of 2's complement when calculating checksums?

When calculating UDP checksums, I know that we complement the result and use it to check for errors. But I don't understand why we use 1's complement instead of 2's complement (as shown here). If there are no errors, the 1's complement sum results in -1 (0xFFFF), while a 2's complement sum results in 0 (0x0000).
To check for correct transmission, the receiver's CPU must first negate the result and then look at the zero flag of the ALU, which costs one additional cycle for the negation. If 2's complement were used, the error check could be done simply by looking at the zero flag.
That is because using 2's complement may give you a wrong result if the sender
and receiver machines have different endianness.
If we use the example:
0000 1111 1110 0000
1111 0000 0001 0000
the checksum with 2's complement calculated on a little-endian machine would be:
0000 0000 0001 0000
if we added our original data to this checksum on a big-endian machine we would get:
0000 0000 1111 1111
which would suggest that our checksum was wrong even though it was not. However, 1's complement results are independent of the endianness of the machine, so if we were to do the same thing with a 1's complement number our checksum would be:
0000 0000 0000 1111
which when added together with the data would get us:
1111 1111 1111 1111
which allows the short UDP checksum to work without requiring both the sender and receiver machines to have the same endianness.
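To make the end-around-carry arithmetic concrete, here is a short Python sketch (the two 16-bit words are the example values above):
def ones_complement_checksum(words):
    # 16-bit one's-complement sum with end-around carry, then complemented.
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)  # fold any carry back in
    return ~total & 0xFFFF

data = [0x0FE0, 0xF010]
checksum = ones_complement_checksum(data)                 # 0x000F
print(hex(ones_complement_checksum(data + [checksum])))   # 0x0 -> no error detected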

How is this CRC calculated correctly?

I'm looking for help. The chip I'm using via SPI (MAX22190) specifies:
CRC polynomial: x^5 + x^4 + x^2 + x^0
CRC is calculated using the first 19 data bits padded with the 5-bit initial word 00111.
The 5-bit CRC result is then appended to the original data bits to create the 24-bit SPI data frame.
The CRC result I calculated with multiple tools is: 0x18
However, the chip reports a CRC error for this. It expects: 0x0F
Can anybody tell me where my calculations are going wrong?
My input data (19 data bits) is:
19-bit data:
0x04 0x00 0x00
0000 0100 0000 0000 000
24-bit, padded with init value:
0x38 0x20 0x00
0011 1000 0010 0000 0000 0000
=> Data sent by me: 0x38 0x20 0x18
=> Data expected by chip: 0x38 0x20 0x0F
The CRC algorithm is explained here.
I think your error comes from the 00111 padding: it must be appended on the right side (the low 5 bits) instead of on the left.
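For what it's worth, a short Python sketch of that right-padded calculation (MSB-first division by the polynomial 0b110101) does reproduce the 0x0F the chip expects for this example frame:
def max22190_crc5(frame24):
    # CRC-5 with polynomial x^5 + x^4 + x^2 + 1 (0b110101), MSB first.
    # frame24 holds the 19 data bits in the top positions and the 5-bit
    # init word 00111 in the low 5 bits, i.e. padded on the right.
    crc = 0
    for i in range(23, -1, -1):
        crc = (crc << 1) | ((frame24 >> i) & 1)
        if crc & 0x20:
            crc ^= 0b110101
    return crc & 0x1F

frame = (0b0000010000000000000 << 5) | 0b00111  # 19 data bits + init word on the right
print(hex(max22190_crc5(frame)))                # 0xf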

Scipy, Numpy: Audio classifier, voice/speech activity detection

I am writing a program to automatically classify recorded phone-call audio files (WAV files) by whether they contain at least some human voice or not (only DTMF, dial tones, ringtones, noise).
My first approach was to implement a simple VAD (voice activity detector) using the ZCR (zero-crossing rate) and energy, but both of these parameters confuse DTMF and dial tones with voice. Since this technique failed, I implemented a trivial method that calculates the variance of the FFT between 200 Hz and 300 Hz. My numpy code is as follows:
import numpy as np
from numpy.fft import fft

def band_variance(frame, fs):
    # Variance of the FFT magnitude between 200 Hz and 300 Hz;
    # frame is one window of samples, fs is the sample rate in Hz.
    wavefft = np.abs(fft(frame))
    n = len(frame)
    fx = np.arange(0, fs, float(fs) / float(n))
    stx = np.where(fx >= 200)
    stx = stx[0][0]
    endx = np.where(fx >= 300)
    endx = endx[0][0]
    return np.sqrt(np.var(wavefft[stx:endx])) / 1000
This resulted in 60% accuracy.
Next, I tried implementing a machine-learning approach using an SVM (support vector machine) and MFCCs (mel-frequency cepstral coefficients). The results were totally incorrect: almost all samples were incorrectly labelled. How should one train an SVM with MFCC feature vectors? My rough code using scikit-learn is as follows:
from scipy.io import wavfile
from sklearn import svm
# MFCC(samplerate, samples) computes the MFCC feature vectors (helper not shown here)

[samplerate, sample] = wavfile.read('profiles/noise.wav')
noiseProfile = MFCC(samplerate, sample)
[samplerate, sample] = wavfile.read('profiles/ring.wav')
ringProfile = MFCC(samplerate, sample)
[samplerate, sample] = wavfile.read('profiles/voice.wav')
voiceProfile = MFCC(samplerate, sample)

machineData = []
for noise in noiseProfile:
    machineData.append(noise)
for voice in voiceProfile:
    machineData.append(voice)

dataLabel = []
for i in range(0, len(noiseProfile)):
    dataLabel.append(0)
for i in range(0, len(voiceProfile)):
    dataLabel.append(1)

clf = svm.SVC()
clf.fit(machineData, dataLabel)
I want to know what alternative approach I could implement.
If you don't have to use scipy/numpy, you might check out webrtcvad, which is a Python wrapper around Google's excellent WebRTC voice activity detection code. The WebRTC VAD uses Gaussian mixture models (GMMs), works well, and is very fast.
Here's an example of how you might use it:
import webrtcvad

# audio must be 16-bit PCM, at 8 kHz, 16 kHz or 32 kHz.
def audio_contains_voice(audio, sample_rate, aggressiveness=0, threshold=0.5):
    # Frames must be 10, 20 or 30 ms long.
    frame_duration_ms = 30
    # Assuming split_audio is a function that will split audio into
    # frames of the correct size.
    frames = split_audio(audio, sample_rate, frame_duration_ms)
    # aggressiveness tells the VAD how aggressively to filter out non-speech.
    # 0 will have the most false positives for speech, 3 the least.
    vad = webrtcvad.Vad(aggressiveness)
    num_voiced = len([f for f in frames if vad.is_speech(f, sample_rate)])
    return float(num_voiced) / len(frames) > threshold
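The example above assumes a split_audio helper; a minimal sketch of one, assuming audio is raw 16-bit mono PCM bytes, could look like this (any trailing partial frame is dropped, since the VAD only accepts full frames):
def split_audio(audio, sample_rate, frame_duration_ms):
    # Each frame is frame_duration_ms worth of 16-bit (2-byte) mono samples.
    frame_bytes = int(sample_rate * frame_duration_ms / 1000) * 2
    return [audio[i:i + frame_bytes]
            for i in range(0, len(audio) - frame_bytes + 1, frame_bytes)]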

What does the TX request option "disable ACK" do exactly in the XBee API?

The question, as mentioned in the title, is what exactly the TX option field value 0x01 (disable ACK) does. I assumed it disables the APS-layer acknowledgement and the additional APS retries, but those occur anyway, even with APS acknowledgement disabled. The retry counter of the TX status frame still counts up, sometimes to 60. I think this is a bit too much for MAC-layer retries. Or are there also retries in the NWK layer?
Regards, Toby
Option 0x01 on the TX Request (API frame) doesn't disable the acknowledgement; it disables the retries (up to 3). The following is an example of a TX Request frame with retries disabled:
7E 00 0F 10 01 00 13 A1 00 40 AA D0 06 FF FE 00 01 04 78
To disable the acknowledgement, you need to set the Frame ID of the TX Request to 0x00. Here is an example:
7E 00 0F 10 00 00 13 A1 00 40 AA D0 06 FF FE 00 00 04 7A
I guess the Transmit Retry Count (from ZigBee Transmit Status frame) is related to CSMA-CA.
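To make the two example frames easier to compare, here is a small Python sketch (mine, not from the XBee documentation) that unpacks them and checks the API checksum, which is 0xFF minus the low byte of the sum of all bytes between the length field and the checksum:
def inspect_tx_request(frame_hex):
    frame = bytes.fromhex(frame_hex)
    length = (frame[1] << 8) | frame[2]
    payload = frame[3:3 + length]
    # ZigBee Transmit Request (0x10): type, frame ID, 64-bit address,
    # 16-bit address, broadcast radius, options, RF data.
    return {
        "frame_id": payload[1],   # 0x00 suppresses the acknowledgement/status response
        "options": payload[13],   # 0x01 disables retries
        "checksum_ok": (0xFF - (sum(payload) & 0xFF)) == frame[3 + length],
    }

print(inspect_tx_request("7E000F10010013A10040AAD006FFFE00010478"))
print(inspect_tx_request("7E000F10000013A10040AAD006FFFE0000047A"))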