I am trying to use WebUSB to read GPS coordinates from a USB-connected GPS receiver from within JavaScript.
I have been able to connect to the receiver; however, I am unsure how to use WebUSB to access the NMEA messages.
So far, I have the following proof-of-concept code:
<html>
<head>
  <title>WebUSB Serial Sample Application</title>
</head>
<body>
  <button id="click">Connect</button><br>
  <button id="send">Send Data</button><br>
  <button id="read">Read Data</button><br>
  <script>
    let y;
    let device;

    var connectButton = document.getElementById('click');
    connectButton.addEventListener('click', function() {
      navigator.usb.requestDevice({
        filters: [{}]
      }).then((selectedDevice) => {
        device = selectedDevice;
        return device.open()
          .then(() => device.selectConfiguration(1))
          .then(() => device.claimInterface(device.configuration.interfaces[0].interfaceNumber))
          .then(() => device.selectAlternateInterface(device.configuration.interfaces[0].interfaceNumber, 0))
          .then(() => {
            y = device;
          });
      });
    });

    var sendButton = document.getElementById('send');
    var sendDecoder = new TextDecoder();
    sendButton.addEventListener('click', async () => {
      // CDC-ACM SET_CONTROL_LINE_STATE (0x22): assert DTR to signal the host is ready
      y.controlTransferOut({
        requestType: 'class',
        recipient: 'interface',
        request: 0x22,
        value: 0x01,
        index: 0x00
      });
      // Standard GET_DESCRIPTOR (0x06): string descriptor (type 3) at index 2
      y.controlTransferIn({
        requestType: 'standard',
        recipient: 'device',
        request: 0x06,
        value: 0x0302,
        index: 0x409
      }, 255)
        .then(result => {
          console.log(sendDecoder.decode(result.data));
          console.log('sent req');
        }).catch(error => {
          console.log(error);
        });
    });
  </script>
</body>
</html>
The controlTransferIn call allows me to read the vendor name from the device, so I know I'm connected and can communicate with it. I just don't know what specific command(s) are needed to access the lat/lon from the device.
UPDATE:
Below is a snapshot of the device config info (image not included).
There are a couple of layers you'll need to understand in order to accomplish your goal. The first is the USB interface implemented by the device.
In the example code you've posted there are two calls which send commands to the device. The first is a controlTransferOut() which sends the standard USB CDC-ACM SET_CONTROL_LINE_STATE request (0x22) to the device to enable the DTR signal (0x01). If your device implements the USB CDC-ACM protocol then this is the right thing to do, as it signals to the device that the host software is ready to receive data. If your device doesn't implement this protocol then the command will be ignored or fail. You're already claiming the first interface; to understand which USB protocol that interface implements you should check device.configuration.interfaces[0].alternates[0].interfaceClass and device.configuration.interfaces[1].alternates[0].interfaceClass. A USB CDC-ACM device will have one interface with class 2 and one with class 10. Class 2 is the control interface while class 10 is the data interface. If your device doesn't have these two interfaces then it probably isn't implementing the USB CDC-ACM protocol and you'll have to figure out what protocol it uses first.
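For example, a quick check you could run from the console once the device is open (property names per the WebUSB API):

  const classes = device.configuration.interfaces.map(
    (iface) => iface.alternates[0].interfaceClass
  );
  console.log(classes); // a CDC-ACM device should show classes 2 and 10 here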
The second call you make is a controlTransferIn() which sends the standard USB GET_DESCRIPTOR request (0x06) to read a string descriptor from the device. Passing 0x0302 asks it to read the string descriptor (type 3) at index 2. This will work for any USB device (assuming the index is right) and so doesn't tell you whether you've figured out what protocol the interface supports.
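One wrinkle: string descriptors are UTF-16LE with a two-byte header (bLength and bDescriptorType), so a cleaner decode of that result would be something like:

  // skip the 2-byte descriptor header and decode as UTF-16LE rather than the default UTF-8
  const utf16 = new TextDecoder('utf-16le');
  console.log(utf16.decode(new DataView(result.data.buffer, 2)));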
Assuming this is a USB CDC-ACM interface then you'll want to look at device.configuration.interfaces[1].alternates[0].endpoints and figure out the endpoint numbers for the IN and OUT endpoints. These are what you'll pass to transferIn() and transferOut() to send and receive data from the device.
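A sketch of that lookup (assuming interfaces[1] is the class-10 data interface, as in the layout described above):

  const endpoints = device.configuration.interfaces[1].alternates[0].endpoints;
  const inEndpoint = endpoints.find((ep) => ep.direction === 'in');
  const outEndpoint = endpoints.find((ep) => ep.direction === 'out');
  // then, for example, read one packet:
  device.transferIn(inEndpoint.endpointNumber, 64)
    .then((result) => console.log(new TextDecoder().decode(result.data)));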
Once you have all that figured out you'll need to work out how to get the device to start sending NMEA messages. If you are lucky then in its default mode it will automatically send them and you can just start calling transferIn() to receive them. Otherwise you will have to figure out what command to send to the device to put it in the right mode. If you have any documentation for the device, or example code in another language that supports this device, that will be very helpful for figuring this out.
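Once NMEA text is flowing, pulling lat/lon out of it is plain string handling. A rough sketch for a GGA sentence (field layout per the NMEA 0183 convention; a real parser should also validate the checksum):

  // e.g. $GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,...
  function parseGGA(sentence) {
    const f = sentence.split(',');
    if (!f[0].endsWith('GGA')) return null;
    const toDegrees = (value, hemi) => {
      const dLen = (hemi === 'N' || hemi === 'S') ? 2 : 3; // ddmm.mmmm vs dddmm.mmmm
      const deg = parseInt(value.slice(0, dLen), 10);
      const min = parseFloat(value.slice(dLen));
      const signed = deg + min / 60;
      return (hemi === 'S' || hemi === 'W') ? -signed : signed;
    };
    return { lat: toDegrees(f[2], f[3]), lon: toDegrees(f[4], f[5]) };
  }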
I have finally solved this problem. The answer (for me, anyway) was to purchase a different GPS receiver that implemented the CDC-ACM interface since there seems to be more examples and better docs for this protocol.
The following proof-of-concept code is working:
<html>
<head>
  <title>WebUSB Serial Sample Application</title>
</head>
<body>
  <button id="click">Connect</button><br>
  <button id="read">Read Data</button><br>
  <script>
    let y;
    let device;

    var connectButton = document.getElementById('click');
    connectButton.addEventListener('click', function() {
      navigator.usb.requestDevice({
        filters: [{}]
      })
        .then((selectedDevice) => {
          device = selectedDevice;
          return device.open();
        })
        .then(() => device.selectConfiguration(1))
        .then(() => device.claimInterface(device.configuration.interfaces[0].interfaceNumber))
        .then(() => device.claimInterface(device.configuration.interfaces[1].interfaceNumber))
        .then(() => device.selectAlternateInterface(device.configuration.interfaces[0].interfaceNumber, 0))
        .then(() => device.selectAlternateInterface(device.configuration.interfaces[1].interfaceNumber, 0))
        .then(() => {
          y = device;
        });
    });

    var readButton = document.getElementById('read');
    var readDecoder = new TextDecoder();
    // keep reading 64-byte chunks from IN endpoint 2; NMEA sentences arrive as plain text
    var readLoop = () => {
      y.transferIn(2, 64)
        .then(result => {
          console.log(readDecoder.decode(result.data.buffer));
          readLoop();
        }).catch(error => {
          console.log(error);
        });
    };

    readButton.addEventListener('click', async () => {
      readLoop();
    });
  </script>
</body>
</html>
I am trying to do some processing on the server side, which I do not want to be viewable on the client side.
I have successfully tried using either fetch or asyncData to populate the state, but I do not want the processing logic to be visible in the browser.
For example:
<template>
  <!-- ... -->
</template>

<script>
import ...

export default {
  layout: 'layout1',
  name: 'Name',
  components: { ... },
  data: () => ({ ... }),
  computed: { ... },
  async asyncData({ store }) {
    const news = await axios.get(
      'https://newsurl.xml'
    ).then(feed => {
      // parse the feed and do some very secret stuff with it,
      // including sha256 with a salt
    })
    store.commit('news/ASSIGN_NEWS', news)
  }
}
</script>
I want the code in asyncData (or in fetch) not to be visible on the client side.
Any suggestion will be appreciated.
There are several questions like this one already, some of which I've answered here and here.
The TL;DR is that if you want something to run on the server only, you probably don't want it in a .vue file in the first place. Using a backend proxy could also be useful (as stated above), especially if you could benefit from caching or want to reduce bandwidth overall.
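A minimal sketch of that proxy idea, assuming Nuxt 3 server routes (the file name and processFeed are hypothetical; in Nuxt 2, a serverMiddleware endpoint would play the same role):

  // server/api/news.js — this file runs only on the server and is never shipped to the client
  import axios from 'axios'

  export default defineEventHandler(async () => {
    const feed = await axios.get('https://newsurl.xml')
    // parse the feed and do the secret processing here, server-side
    return processFeed(feed.data) // processFeed: your private logic
  })

The page then just fetches /api/news and only ever receives the processed result.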
You can use the onServerPrefetch hook.
import { onServerPrefetch } from 'vue'
import { useStore } from 'vuex'

// inside setup(): `this` is not available here, so grab the store via Vuex 4's useStore()
const store = useStore()

onServerPrefetch(async () => {
  const news = await axios.get(
    'https://newsurl.xml'
  ).then(feed => {
    // parse the feed and do some very secret stuff with it,
    // including sha256 with a salt
  })
  store.commit('news/ASSIGN_NEWS', news)
})
I am developing an app in ReactNative offline.
One of the functionalities is to use bluetooth to synchronize the data (that the app was collecting) with other devices that use the same app.
I started on this task with the react-native-ble-manager library. I can connect from device A to device B, but I don't understand how to listen for the incoming connection on device B. I need to know this in order to show a certain view.
Can anybody help me?
Am I using the correct library?
Thanks!
You can't use only the react-native-ble-manager for this project. The library states in its readme that it is based on cordova-plugin-ble-central, which can only act as a central. For a BLE connection you need a central and a peripheral.
Take a look at react-native-peripheral. It allows you to act as a peripheral: create a characteristic with some data, add it to a service, and register it so other devices can find it. This is their usage example:
import Peripheral, { Service, Characteristic } from 'react-native-peripheral'

Peripheral.onStateChanged(state => {
  // wait until Bluetooth is ready
  if (state === 'poweredOn') {
    // first, define a characteristic with a value
    const ch = new Characteristic({
      uuid: '...',
      value: '...', // Base64-encoded string
      properties: ['read', 'write'],
      permissions: ['readable', 'writeable'],
    })

    // add the characteristic to a service
    const service = new Service({
      uuid: '...',
      characteristics: [ch],
    })

    // register GATT services that your device provides
    Peripheral.addService(service).then(() => {
      // start advertising to make your device discoverable
      Peripheral.startAdvertising({
        name: 'My BLE device',
        serviceUuids: ['...'],
      })
    })
  }
})
There is a section about dynamic values where they explain how you can use the onReadRequest and onWriteRequest callbacks to listen to read and write operations on the peripheral and even return dynamic values on each read request:
new Characteristic({
  uuid: '...',
  properties: ['read', 'write'],
  permissions: ['readable', 'writeable'],
  onReadRequest: async (offset?: number) => {
    const value = '...' // calculate the value
    return value // you can also return a promise
  },
  onWriteRequest: async (value: string, offset?: number) => {
    // store or do something with the value
    this.value = value
  },
})
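Note that, as far as I can tell, react-native-peripheral exposes no explicit "incoming connection" event; in practice, a read or write on one of your characteristics is the signal that another device has connected, so device B could trigger its view change from inside onWriteRequest (or onReadRequest).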
Currently, I am working on a WebRTC project where you can make and receive calls. I also want to add screen-share functionality to it.
Can anyone provide a good documentation link?
I am currently following the official documentation of PeerJS.
I was able to do audio-video calling but am stuck on the screen-sharing part.
Help me!
You need to get a stream just like you do with getUserMedia, and then you give that stream to PeerJS.
It should be something like this:
var displayMediaOptions = {
  video: {
    cursor: "always"
  },
  audio: false
};

navigator.mediaDevices.getDisplayMedia(displayMediaOptions)
  .then(function (stream) {
    // add this stream to your peer
  });
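"Add this stream to your peer" would typically mean passing it to a call, e.g. myPeer.call(remotePeerId, stream) in PeerJS (remotePeerId being the other side's peer ID), as the next answer shows.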
I'm working with and learning about WebRTC. From what I've read, I think the solution here probably hinges on getDisplayMedia. That's also what this React, Node and peerJS tutorial suggests (though I haven't tried it myself yet).
let screenShare = document.getElementById('shareScreen');

screenShare.addEventListener('click', async () => {
  captureStream = await navigator.mediaDevices.getDisplayMedia({
    audio: true,
    video: { mediaSource: "screen" }
  });
  // Instead of adminId, pass the peer ID of whoever should receive captureStream in the call
  myPeer.call(adminId, captureStream);
})
I have this USB-to-GSM Serial-GPRS-SIM800C module, and I have successfully been able to send AT commands to it and do things. But what I really wanted was text-to-speech capability: I was able to generate an AMR audio file, upload it onto the module's internal memory, and play it whenever someone calls.
But the message heard by callers is going to be dynamic and TTS will run in real time, so the process of uploading the audio file to the module will cause undesirable delay. Is there any way I could stream audio through the module?
Thanks.
Here's what I have had to do.
1. Start call (ATDxxxxxxxxxxx;)
2. Set mode (AT+DTAM=2)
3. Start recording (AT+CREC=1,1,0)
4. Speak what I want to play back into the microphone
5. Stop recording (AT+CREC=2)
6. Hang up (ATH)

Now I can play back what I recorded using the following:

1. Start call (ATDxxxxxxxxxxx;)
2. Set mode (AT+DTAM=2)
3. Start playback (AT+CREC=4,1,0,80)
4. Hang up (ATH)
No idea how to do this dynamically or even upload an *.amr file.
Would be grateful if you could share what commands you used to see if there's any way to improve.
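For reference, a rough sketch of driving that playback sequence from Node with serialport-gsm (the library used in the answer below); executeCommand usage follows that answer, the phone number is a placeholder, and you would still need to wait for playback to finish before hanging up:

  // dial, set mode, then start playback, per the numbered steps above
  modem.executeCommand('ATD+15551234567;', (result) => {
    log.debug(result);
    modem.executeCommand('AT+DTAM=2', (result) => {
      log.debug(result);
      modem.executeCommand('AT+CREC=4,1,0,80', (result) => {
        log.debug(result);
      });
    });
  });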
To answer @anothersanj:
I'm using serialport-gsm to make things easier.
This is how I go about it:
// create the directory on the module's filesystem
modem.executeCommand('AT+FSMKDIR=C:\\stats\\', (result) => { log.debug(result); });

// read the audio file from your computer with the Node.js fs module
fs.readFile('tts2.amr', function(err, amr_data) {
  if (!err) {
    let fsize = fs.statSync('tts2.amr').size;
    log.debug(fsize);

    // create the file in the GSM module's memory
    modem.executeCommand('AT+FSCREATE=C:\\stats\\tts2.amr', (result) => { log.debug(result); });

    // write the file contents into the GSM module's memory
    modem.executeCommand('AT+FSWRITE=C:\\stats\\tts2.amr,0,' + fsize + ',100', (result) => {
      modem.port.write(amr_data);
    });

    // display the file list on the specified path (like the ls command)
    modem.executeCommand('AT+FSLS=C:\\stats', (result) => { log.debug(result); });
  }
});
And for playing the file whenever someone calls you do:
// play the file on an incoming call
modem.on('onNewIncomingCall', (result) => {
  log.debug(result);
  // answer the call
  modem.executeCommand('ATA', (result) => { log.debug(result); });
  // play the uploaded file to the caller
  modem.executeCommand('AT+CMEDPLAY=1,"C:\\stats\\tts2.amr",0,100', (result) => { log.debug(result); });
  // enable DTMF detection
  modem.executeCommand('AT+DDET=1', (result) => { log.debug(result); });
});
I am building a simple WebRTC app with OpenTok.
I need to be able to select camera, audio input and audio output.
Currently that doesn't seem easily possible.
See opentok-hardware-setup
https://github.com/opentok/opentok-hardware-setup.js/issues/18
I am loading OpenTok in my index.html file
and opentok-hardware-setup.js.
All looks great and I can select the microphone and camera, BUT not the speaker output, aka audiooutput.
<script src="https://static.opentok.com/v2/js/opentok.min.js"></script>
From the console, I tried
OT.getDevices((err, devices) => { console.debug(devices)});
and observed that you can't get the audioOutput
(4) [{…}, {…}, {…}, {…}]
  0: {deviceId: "default", label: "Default - Built-in Microphone", kind: "audioInput"}
  1: {deviceId: "b183634b059298f3692aa7e5871e6a463127701e21e320742c48bda99acdf925", label: "Built-in Microphone", kind: "audioInput"}
  2: {deviceId: "4b441035a4db3c858c65c30eabe043ae1967407b3cc934ccfb332f0f6e33a029", label: "Built-in Output", kind: "audioInput"}
  3: {deviceId: "05415e116b36584f848faeef039cd06e5290dde2e55db6895c19c8be3b880d91", label: "FaceTime HD Camera", kind: "videoInput"}
  length: 4
whereas you can get them using navigator.mediaDevices.enumerateDevices()
Any pointers?
Disclosure, I'm an employee at TokBox :). OpenTok does not currently provide a way to specify the audio output device. This is still an experimental API and only works in Chrome. When the API is standardised and has wider browser support we will make it easier.
In the meantime, it's pretty easy to do this using native WebRTC. There is a good sample for this at https://webrtc.github.io/samples/src/content/devices/multi/ and the source code can be found at https://github.com/webrtc/samples/blob/gh-pages/src/content/devices/multi/js/main.js
In summary you use the enumerateDevices method as you found. Then you use the setSinkId() method on the video element https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement/setSinkId
You can get access to the videoElement by listening to the videoElementCreated event on the subscriber like so:
subscriber.on('videoElementCreated', (event) => {
  if (typeof event.element.sinkId !== 'undefined') {
    event.element.setSinkId(deviceId)
      .then(() => {
        console.log('successfully set the audio output device');
      })
      .catch((err) => {
        console.error('Failed to set the audio output device ', err);
      });
  } else {
    console.warn('device does not support setting the audio output');
  }
});
So, the answer given by @Adam Ullman is no longer valid, since there is now a separate audio element created alongside the video element, preventing us from using the setSinkId method of the video element.
I found a solution that consists of finding the audio element from the video one and using its own setSinkId.
Code:
Code:
const subscriber_videoElementCreated = async (event) => {
  const videoElem = event.element;
  // the audio element is created as a sibling of the video element
  const audioElem = Array.from(videoElem.parentNode.childNodes).find(
    (child) => child.tagName === 'AUDIO'
  );
  if (audioElem && typeof audioElem.sinkId !== 'undefined') {
    try {
      await audioElem.setSinkId(deviceId);
    } catch (err) {
      console.log('Could not update speaker ', err);
    }
  }
};
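Here deviceId is the id of the chosen audiooutput device from navigator.mediaDevices.enumerateDevices(), and the handler still needs to be attached, e.g. subscriber.on('videoElementCreated', subscriber_videoElementCreated);.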
OpenTok (now Vonage) now provides an API for doing exactly this in 2.22.
It is not supported in all browsers (Safari, for example), but for browsers that support setSinkId, there is now a uniform API which wraps the functionality handily.
https://tokbox.com/developer/guides/audio-video/js/#setAudioOutput
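Per that guide, usage is roughly along these lines (a sketch from my reading of the docs; verify the method names against the current API reference):

  const devices = await OT.getAudioOutputDevices();
  await OT.setAudioOutputDevice(devices[0].deviceId);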