From Node-RED to Pure Data with UDP

I want to send UDP from Node-RED to Pure Data. In Node-RED, I have a UDP output node set to 127.0.0.1:3001 and a Pd netreceive object set to "3001 1" (the 1 sets the object to UDP rather than TCP). No message is received in the Pd patch.
To thicken the plot, a Node-RED UDP output node set to 127.0.0.1:1881 does successfully send to a Node-RED UDP input node set to 1881. Also, a TCP output node set to 127.0.0.1:3000 does connect with a Pd netreceive object set to 3000, reported by the Pd console as "EOF on socket 12".
As the Node-RED UDP output node is sending within the flow and Pd can report a TCP connection, I suspect there's something I have to do to format the message for Pd. Any ideas?

netreceive expects messages to be FUDI-formatted. Basically, this means messages are terminated with a semicolon. Until a ';' is received, [netreceive] won't output anything.
Read more here: https://en.wikipedia.org/wiki/FUDI
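As a quick test independent of Node-RED, a minimal Python sketch (assuming netreceive is listening on UDP port 3001 on the same machine; the message text is arbitrary) looks like this:

import socket

# FUDI: atoms separated by whitespace, message terminated by a semicolon.
msg = b"hello 1 2 3;\n"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 3001))
sock.close()

In Node-RED the equivalent fix is simply to append ';' (a trailing newline is optional whitespace) to msg.payload, for example in a function node, before it reaches the UDP output node.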

Please check out my git repo for a solution.
https://github.com/sylatupa/Digital-Culture-Sound-Client/issues/1
Node-RED was used to receive MQTT messages on particular topics.
I route the topics to the appropriate shell command that runs the locally installed pdsend executable.
I take the MQTT payload and pipe ('|') the two strings to the pdsend executable.
The 'Left 3' message is relayed by the execution of pdsend.
The Pure Data patch receives and routes the 'Left 3' message.
Node-RED is running on a Raspberry Pi, alongside the MQTT broker.
I am testing with an MQTT client written in Python.
See the GitHub repo for the code and the Pure Data patch, and maybe the Node-RED flow if that can be exported.
What is still lacking is support for more complex messages: JSON-encoded strings and deeper topic hierarchies such as /pi/sensor1.
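As a rough sketch of that shell-command step (the port number 3001, the host, and the message text are assumptions, not taken from the repo), the MQTT payload can be handed to pdsend from Python like this:

import subprocess

# pdsend reads text from stdin and forwards it to a listening netreceive:
#   pdsend <portnumber> [host] [udp|tcp]
payload = "Left 3"

# The ';' terminates the FUDI message for netreceive.
subprocess.run(["pdsend", "3001", "localhost", "udp"],
               input=(payload + ";\n").encode("utf-8"),
               check=True)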

Related

Sending commands to a GPS device using the gpsd Python library

I use the gpsd Python library to read and parse the NMEA strings received from the GPS device. I would like to send some commands to the GPS in order to fine-tune the measurement rate, report rate, and so on.
Is it possible using the gpsd library, or must I send the commands in some other way?
According to 'gpsd' manual:
To send a binary control string to a specified device, write to the
control socket a '&', followed by the device name, followed by '=',
followed by the control string in paired hex digits.
So if you have the gpsd service running as gpsd -F /var/run/gpsd.sock, you can use the following code to send commands to the GPS device:
import socket
import sys
# Create a socket
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
# Connect the socket to the port where the GPSD is listening
gpsd_address = '/var/run/gpsd.sock'
sock.connect(gpsd_address)
# #BSSL 0x01<CR><LF> - Set NMEA Output Sentence (GGA only)
# cmd = '#BSSL 0x01' + '\r\n'
# #RST<CR><LF> - RST - Reset
cmd = '#RST' + '\r\n'
message = '&/dev/ttyUSB1='
cmd_hex = cmd.encode('utf-8').hex()
print ('cmd_hex {}'.format(cmd_hex))
# cmd_hex 405253540d0a
message += cmd_hex
bin_message = message.encode('utf-8')
print ("bin message {}".format(bin_message))
# bin message b'&/dev/ttyUSB1=405253540d0a'
sock.sendall(bin_message)
data = sock.recv(16)
print ('received {}'.format(data))
# received b'OK\n'
sock.close()
In my case I am sending the #RST command followed by the CR and LF characters.

skb_tail_pointer(skb) does not work correctly and points to the UDP header tail wrongly

On Linux kernel version 3.2.48.
In a UDP server running in kernel mode, skb_tail_pointer(skb) is not correct: it points to the end of the UDP header, missing the payload size. udphdr->len is right.
It is strange.
It is possible that the tail and data pointers point to the same location; skb_tail_pointer() returns the starting tail address.

Redis mass insertion: protocol vs inline commands

For my task I need to load a large amount of data into Redis as quickly as possible. It looks like this article is right about my case: https://redis.io/topics/mass-insert
The article starts by giving an example of using multiple inline SET commands with redis-cli. Then it proceeds to generating the Redis protocol and again uses it with redis-cli. It doesn't explain the reasons or benefits of using the Redis protocol.
Using the Redis protocol is a bit harder and it generates a bit more traffic. I wonder what the reasons are to use the Redis protocol rather than simple one-line commands. Is it perhaps that, although the data is larger, it is easier (and faster) for Redis to parse?
Good point.
Only a small percentage of clients support non-blocking I/O, and not
all the clients are able to parse the replies in an efficient way in
order to maximize throughput. For all this reasons the preferred way
to mass import data into Redis is to generate a text file containing
the Redis protocol, in raw format, in order to call the commands
needed to insert the required data.
My understanding is that when you use the Redis protocol directly you are emulating a client, which benefits from the points highlighted above.
Based on the docs you provided, I tried these scripts:
test.rb
def gen_redis_proto(*cmd)
  proto = ""
  proto << "*" + cmd.length.to_s + "\r\n"
  cmd.each { |arg|
    proto << "$" + arg.to_s.bytesize.to_s + "\r\n"
    proto << arg.to_s + "\r\n"
  }
  proto
end

(0...100000).each { |n|
  STDOUT.write(gen_redis_proto("SET", "Key#{n}", "Value#{n}"))
}
test_no_protocol.rb
(0...100000).each { |n|
  STDOUT.write("SET Key#{n} Value#{n}\r\n")
}
ruby test.rb > 100k_prot.txt
ruby test_no_protocol.rb > 100k_no_prot.txt
time cat 100k_prot.txt | redis-cli --pipe
time cat 100k_no_prot.txt | redis-cli --pipe
I've got these results:
teixeira: ~/stackoverflow $ time cat 100k_prot.txt | redis-cli --pipe
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 100000
real 0m0.168s
user 0m0.025s
sys 0m0.015s
(5 files, 6.6 MB)
teixeira: ~/stackoverflow $ time cat 100k_no_prot.txt | redis-cli --pipe
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 100000
real 0m0.433s
user 0m0.026s
sys 0m0.012s
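For anyone not using Ruby, the same RESP stream can be generated with an equivalent Python sketch (the key/value names and the count mirror the Ruby script; the file name is arbitrary):

gen_proto.py

import sys

# RESP framing: "*<argc>\r\n", then "$<byte length>\r\n<arg>\r\n" per argument.
def gen_redis_proto(*cmd):
    proto = "*%d\r\n" % len(cmd)
    for arg in cmd:
        arg = str(arg)
        proto += "$%d\r\n%s\r\n" % (len(arg.encode("utf-8")), arg)
    return proto

for n in range(100000):
    sys.stdout.write(gen_redis_proto("SET", "Key%d" % n, "Value%d" % n))

Pipe its output to redis-cli --pipe exactly as with the Ruby version: python3 gen_proto.py | redis-cli --pipe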

How to set up the RSS hash function in XL710 to receive the IPv4 flow type?

In DPDK the ETH_RSS_IPV4 flow type is not activated by default for the XL710 Intel NIC. So, when you want to distribute packets among lcores, you have to select other IPv4 flow types that are supported by the XL710, namely ETH_RSS_FRAG_IPV4, ETH_RSS_NONFRAG_IPV4_TCP, ETH_RSS_NONFRAG_IPV4_UDP, ETH_RSS_NONFRAG_IPV4_SCTP, and ETH_RSS_NONFRAG_IPV4_OTHER. However, you will face an awkward problem when dealing with fragmented IP packets. If you choose ETH_RSS_FRAG_IPV4 together with ETH_RSS_NONFRAG_IPV4_TCP, some fragmented packets of a connection will fall into another queue, because they don't have L4 port numbers. If you exclude ETH_RSS_NONFRAG_IPV4_TCP, the ETH_RSS_FRAG_IPV4 hash function will not be applied to non-fragmented packets, and those packets will all go to queue 0. No other combination of hash functions works either. So, what should we do?
The behavior of the XL710 is not compatible with the conventions in DPDK. So, you must work directly with the API offered by the i40e driver in order to set up RSS for ETH_RSS_IPV4. As mentioned in the Intel® Ethernet Controller 710 Series Specification Update, page 18 (release Jan 2017):
Functions that require the Hash (RSS) filters on IPv4 packets should
set all IPv4 PCTYPEs in the PFQF_HENA / VFQF_HENA (PCTYPEs 31, 33…36)
Supported packet types (PCTYPEs) are listed in the Intel® Ethernet Controller 710 Series Datasheet, pages 597 and 598 (release Jan 2017). You can see that there is no packet type defined for plain IPv4.
However there is a solution. The clue is to modify the input set for all required flow types (or packet types). Let's try it with testpmd tool which is provided by DPDK in app folder. After compiling DPDK and the app, run the testpmd application:
./app/test-pmd/testpmd -c ff -n 2 -w 0a:00.0 -w 0a:00.1 -- -i --rxq=4 --txq=4
We have two XL710 NICs in our system. With the following commands you can configure the XL710 to support the IPv4 data flow as intended.
port config all rss all
set_hash_input_set 0 ipv4-tcp src-ipv4 select
set_hash_input_set 0 ipv4-tcp dst-ipv4 add
set_hash_input_set 0 ipv4-udp src-ipv4 select
set_hash_input_set 0 ipv4-udp dst-ipv4 add
set_hash_input_set 1 ipv4-tcp src-ipv4 select
set_hash_input_set 1 ipv4-tcp dst-ipv4 add
set_hash_input_set 1 ipv4-udp src-ipv4 select
set_hash_input_set 1 ipv4-udp dst-ipv4 add
set_hash_global_config 0 default ipv4-frag enable
set_hash_global_config 0 default ipv4-tcp enable
set_hash_global_config 0 default ipv4-udp enable
set_hash_global_config 1 default ipv4-frag enable
set_hash_global_config 1 default ipv4-tcp enable
set_hash_global_config 1 default ipv4-udp enable
This selects the proper input set for the TCP and UDP flow types by removing the L4 port fields. The set_hash_global_config command enables the symmetric hash if you need it. By modifying the TCP input set it behaves just like the Frag IPv4 flow type, and as a result all packets belonging to the same connection go to the same lcore.
Note that the default input set for the Frag IPv4 and NonFrag IPv4, Other flow types is already the source and destination IPv4 addresses (IP4-S and IP4-D), so they don't need to be modified. Remember to modify the input set of all other IPv4 flow types, and their symmetric hash configuration, as well.
You can find the API functions of those commands by looking at the source code of the testpmd application.

Delimiter string in Telit GL 868 Dual V3

I am using the Telit GL 868 Dual V3 modem. The AT command AT#SCFG has two relevant parameters: the packet size to be used and the data-sending timeout for TCP. Is there any AT command which specifies that, if a delimiter string is found, the data will be sent on TCP immediately, ignoring the packet size and timeout?
There are the commands #PADFWD and #PADCMD, which serve the purpose of a delimiter.
See the AT commands reference guide for the Telit modem for the details of these commands.