Ty and Relay application IDs in freeDiameter source code - free-diameter

In a packet capture we can see Ty and Relay application IDs in the CEA. These seem to be hardcoded in the freeDiameter source code. What would be the impact if we removed them?

Related

Is there a protocol or well-defined procedure for instruments to send their measurement results to control PCs over GPIB?

With a control PC, I am addressing a R&S ESPI Receiver device to perform a frequency scan and return the measurement results via the BAT-EMC control software, with an NI GPIB-USB controller in between. My goal is to trace the binary measurement data (Definite Length Block Data according to IEEE 488.2) sent to the control PC, to understand how the device decides on the byte size of each binary block sent.
The trace shows that the binary blocks are sent with no consistent pattern or rule!
E.g., running the same scan with the same frequency range and step twice may distribute the measurement bytes differently across the binary blocks (and possibly produce a different total number of blocks), although the amount of data delivered is the same.
Any help in figuring out how the device and the control software are communicating the measurement data would be appreciated.
PS: The NI trace at the GPIB-controller level does not show the control software specifying a byte size when querying for the next block, nor does the instrument send this piece of information when it issues a service request so that the software queries it for more available data (according to the trace).
Make sure that you are giving the instrument enough time to respond. Possibly you are sending commands from the PC, which would assert the ATN line and interrupt the response. You should be able to configure the instrument to send one result. Configure the instrument as a listener and talker, and set it to send only one response per trigger. Then send the group execute trigger (GET) and read the results off the bus. When it's done, measure how long that packet took to be sent. If you are sending triggers before the full response has arrived, you will be terminating the output stream. I suspect this because the streams differ randomly.
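For illustration, here is a minimal PyVISA sketch of that trigger-then-read flow (the GPIB address and timeout are hypothetical, and the instrument is assumed to already be configured to return one result per trigger). It reads one complete response and parses the IEEE 488.2 definite-length block header itself, which is also a handy way to check how many bytes each block announces:

```python
# Minimal PyVISA sketch (hypothetical GPIB address; instrument already configured
# to return one result per trigger). Reads one response and parses the
# IEEE 488.2 definite-length block header: '#' + n + <n length digits> + data.
import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("GPIB0::20::INSTR")
inst.timeout = 30000                # ms; allow the sweep to finish before timing out

inst.assert_trigger()               # group execute trigger (GET)
raw = inst.read_raw()               # read the whole response in one transfer

assert raw[0:1] == b"#", "not a definite-length block"
n_digits = int(raw[1:2])                        # number of digits in the length field
length = int(raw[2:2 + n_digits])               # announced payload size in bytes
data = raw[2 + n_digits:2 + n_digits + length]  # the binary measurement payload
print(f"block announces {length} bytes, received {len(data)} bytes")
```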
I’m just starting to learn GPIB so please write back what happened.

Constant carrier digital transmission in GNURadio with USRP

I'm trying to implement the UPLINK of a Ground Station controlling a small satellite. The idea is that the link should always stay active between transmitted telecommands. For this, I need to insert some DUMMY or IDLE sequence bytes such as 0xAA or similar.
I have found that some people have already faced a similar issue and posted their questions here:
https://www.ruby-forum.com/t/constant-carrier-digital-transmission/163379
https://lists.gnu.org/archive/html/discuss-gnuradio/2016-08/msg00148.html
So far, the best I could achieve was to modify the EventStream Source block from https://github.com/osh/gr-eventstream in order to preload the vectors with my dummy sequence (i.e. 0xAA) instead of preloading them with zeroes. This is a general overview of the GNURadio graph I'm using:
GNURadio Flowgraph Picture
This solution, however, introduces a huge latency: the sent message does not appear at the output until a long time has passed (on the order of several seconds).
Is there a way of programming the USRP with GNURadio so that it constantly sends a fixed sequence that is only interrupted when an incoming message is passed in? I assume the USRP can read tagged streams in order to schedule transmissions. However, I'm not sure how to fit this into my specific application.
Thanks beforehand!
Joa
I believe this could be done using a TCP or UDP source block.
Your control information could be sent to the socket using TCP/UDP. GNU Radio would then collect and transmit the packets. Your master control program would then have to handle the IDLE stuffing, but solving the problem external to GNU Radio is easier.
Your master control program would basically do the following (see the sketch after this list):
1. transmit control data as needed
2. if no control data is ready before the next packet must be sent, send an IDLE packet
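Here is a rough sketch of that control loop in Python, assuming a GNU Radio UDP Source block is listening on a hypothetical local port and feeding the modulator; the packet period, packet size, and 0xAA idle pattern are placeholders you would tune to your frame format and data rate:

```python
# Rough sketch of the external master control loop (hypothetical port, period and
# idle pattern). A GNU Radio UDP Source block is assumed to listen on UDP_PORT and
# feed the modulator, so the transmitter always has a packet to send.
import queue
import socket

UDP_HOST, UDP_PORT = "127.0.0.1", 52001   # must match the UDP Source block settings
PACKET_PERIOD = 0.05                      # seconds between packets on the air
IDLE_PACKET = bytes([0xAA] * 64)          # dummy/idle filler between telecommands

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
telecommands = queue.Queue()              # filled elsewhere by the ground station software

while True:
    try:
        packet = telecommands.get(timeout=PACKET_PERIOD)  # real telecommand ready in time?
    except queue.Empty:
        packet = IDLE_PACKET                              # no: keep the link alive
    sock.sendto(packet, (UDP_HOST, UDP_PORT))
```

As long as the packet rate keeps the UDP Source from running dry, the USRP never sees a gap in samples and the carrier stays up.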

Serial device with no documentation, GPS board

I have a GPS circuit board from China. The only information I can find on this thing is: "amoj GPS 04C www.amoj.com".
It has a serial (DB9) connection, and I would like to work out how to connect to it with PuTTY or something similar.
How can I determine the port settings required to access it?
Pictures below:
Photos in Dropbox
The Jupiter TU60 serial interface is 9600 8N1 by default. The only sentence it will output automatically is the flash checksum message, about a second after power-up. Google the datasheet for the device and it will tell you about this.
To have it output the position and other information, you must command it to do so. There is a default set of commands that are active after power up. They begin with ## and are from the protocol used by Motorola. Refer to the M12+ Users Guide and Supplement (available online) for information on how to use these commands. I have been able to enter them from Realterm. The only tricky part is calculating the checksum. You can use most hex calculators to do that.
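In case it helps, here is a small Python sketch of how I work the checksum out; as far as I recall it is just the XOR of every byte between the two-character command prefix and the checksum byte itself, but do verify that (and the exact framing) against the M12+ Users Guide. The "Ea" command bytes shown are only illustrative:

```python
# Sketch of the Motorola binary checksum (verify against the M12+ Users Guide):
# as far as I recall it is the XOR of every byte between the two-character
# command prefix and the checksum byte itself. The example bytes are illustrative.
def motorola_checksum(payload: bytes) -> int:
    csum = 0
    for b in payload:
        csum ^= b          # running XOR over command ID + data bytes
    return csum

payload = b"Ea" + bytes([0x01])    # command ID "Ea" plus one illustrative data byte
print(f"checksum byte: 0x{motorola_checksum(payload):02X}")
```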
According to the datasheet, the unit goes into survey mode automatically and after about 24 hours goes into position hold. The 1PPS and 10KHz signals are valid to less than a microsecond within a few minutes of power-up, and to 50 ns after a day. I have compared this against another standard I have in order to verify it. You can use the ##Ea command to get the status of the unit, and the M12+ Users Guide will tell you how to decode it.
Look for $GP... messages at 4800 and 9600 BPS as yegorich suggests. Common NMEA messages output by GPS devices are $GPGGA, $GPVTG, and $GPRMC. If you find that data coming out, use Google to look up the NMEA 0183 sentence structure and you will have what you need.
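If it is easier than juggling terminal settings, here is a quick pyserial sketch that probes both baud rates and prints anything that looks like an NMEA sentence; the port name is whatever your serial adapter enumerates as:

```python
# Quick NMEA probe (port name is whatever your adapter shows up as, e.g. COM1
# or /dev/ttyUSB0). Tries 4800 and 9600 baud, 8N1, and prints $GP... sentences.
import serial

PORT = "COM1"

for baud in (4800, 9600):
    with serial.Serial(PORT, baud, bytesize=8, parity="N", stopbits=1, timeout=2) as port:
        print(f"--- listening at {baud} baud ---")
        for _ in range(20):
            line = port.readline().decode("ascii", errors="replace").strip()
            if line.startswith("$GP"):
                print(line)            # e.g. $GPGGA, $GPVTG, $GPRMC
```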
I have the same board, with the Navman Jupiter T TU60 GPS 1PPS 10KHz module on it. I just received my SMA antenna and have hooked it up. I am using 12.6V power to the centre pin.
It outputs 1PPS on the LED with no signal, so that is not to be trusted. Mine is labelled 1PPS and 10KHz underneath the PCB, but these are actually swapped! I put the 10KHz output on my DSO and get a 10KHz square wave with a 50% duty cycle, but there is ringing on the waveform rise, so I have to set the trigger level to 0.8V to get the DSO to register the 10KHz frequency. I suspect this may be because the output expects a load and is not seeing one. Now, was I using AC or DC coupling?
I too am getting nothing on the serial port. I tried 9600 and 4800 using PuTTY on COM1 (I have a nice old motherboard) and then tried swapping RX and TX, but no luck. As of now I am checking out the serial signals with the DSO to see if I can work out what is happening. I suspect that these boards are rubbish, and useful as power supplies only.
It reads 10.0000 on my HP 5328A counter and sometimes reads 9.9999. It would be nice to be able to talk to the GPS to see whether it has satellite lock.
Please let me know how you get on and if you find out any further info.
Brett VK6EZ.

Is it possible to programmatically power on/off the 3V3?

I have a Netduino Plus with a transceiver attached via SPI. I would like to reset the transceiver every time the Netduino restarts. Is it possible to programmatically power on/off the 3V3 pin?
I would recommend using a FET (controlled by one of the I/O pins) to enable/disable 3V3 power to your transceiver. When you say transceiver, I think "more than a few mA" :)
BTW, we took this feedback into account with the new Shield Base module for Netduino Go. It has an integrated FET on both 3V3 and 5V power headers, so you could enable/disable power to your shield in code. Once the new Ethernet go!bus module ships and the Shield Base comes out of beta (soon), your solution can be redeployed to Netduino Go + Shield Base with few/no code changes.
Chris
Secret Labs LLC
Looking at the circuit diagram ( http://www.netduino.com/netduinoplus/schematic.pdf ), I can see only the Micro SD Card Slot having its power controlled programmatically. You could rig up a relay to control it (via a transistor, of course) instead, or if the transceiver uses less than 130mA (the current limit of the device shown: http://www.datasheetarchive.com/BSS84W-7-F-datasheet.html) you could copy the circuit from the Netduino Plus. Buying a relay shield looks like overkill, but you might have other uses for it.
Have you looked into resetting the transceiver programmatically instead of the brute-force method of power-cycling it?
Just to provide another view: you could use a transistor driven by the Netduino RESET line, which will reset the device every time the Netduino reboots. Or you can just connect the transistor to a spare digital pin and control it in code.
What specific SPI device are you using? You mention that it's a transceiver, but we could probably provide better information if we knew the exact part number. If your device requires less than 8 mA, the Netduino Plus specs seem to indicate that one option could be using a digital output pin as the power source.
Unfortunately, Secret Labs don't use exactly the language I'd expect and call out the sink and source current maximums, so I would contact them directly first to see whether you risk blowing your chip. I'll see if I can get an answer from them and amend this post if/when I do.
Update: Sink and source current is the same on the Netduino. See my post on their forums about sink vs. source current for a more in-depth explanation. So, if your device can run off of just a few milliamps, you should be able to use a digital I/O pin to power it.
Also, a lot of devices have enable pins. You can usually reset them with that line instead of pulling the power if that helps. Sometimes with flaky hardware it is better to pull the power though.

Game Network physics Sync with Objective C and Box2d

I have a table hockey game for iPhone and now I'm doing the multiplayer part of it. I decided that the iPhone that starts the match is the server.
The physics run on both the server and the client, so the client's view stays smooth and not 'jumpy', since it's a really fast game.
The server sends constant messages to the client, so the client can adjust its position and velocity. The problem is that sometimes the client jumps back in position because of the delay.
I've done clock synchronization between the client and the server, so I can compensate the X and Y position using the clock difference and the velocity the server sent. The problem is that it still looks kind of jumpy. How can I synchronize this properly? I've been trying all sorts of things but nothing seems to work.
Thank you.
You're describing a well-known problem with client/server game state synchronization. Quoting this article:
The most complicated part of client side prediction is handling the correction from the server. This is difficult, because the corrections from the server arrive in the past due to client/server communication latency. We need to apply this correction in the past, then calculate the resulting corrected position at present time on the client.
Have a look at that article, particularly the section titled "Client side prediction".
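To make the "apply the correction in the past" idea concrete, here is a rough sketch of rewind-and-replay reconciliation (in Python for brevity; the names are placeholders, and in your game the step function would be a fixed-timestep Box2D world step for the puck and paddles):

```python
# Rough sketch of rewind-and-replay reconciliation. Names and the physics step are
# placeholders; in the actual game step() would be a fixed-timestep Box2D world step.
from collections import deque

history = deque(maxlen=120)    # (tick, player_input, state) for the last ~2 s at 60 Hz

def step(state, player_input, dt):
    # Placeholder: advance the simulation by one fixed timestep using player_input.
    return state

def record_frame(tick, player_input, state):
    history.append((tick, player_input, state))    # call this every simulated tick

def on_server_correction(server_tick, server_state, dt):
    """Rewind to the corrected tick, then replay stored inputs up to the present."""
    state = server_state
    for tick, player_input, _ in history:
        if tick <= server_tick:
            continue                               # at or before the correction: discard
        state = step(state, player_input, dt)      # re-simulate on top of the correction
    return state                                   # corrected present-time state
```

Instead of snapping the rendered puck to the returned state (which is exactly what looks 'jumpy'), blend the displayed position toward it over a few frames.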