I'm trying to read from /dev/ttyUSB0, but I'm receiving corrupted data. I have already tried cat, dd, and libusb bindings for Node.js; all give the same result.
The device attached to that USB port sends a constant flow of data, and I think the OS might be the problem.
I'm using the data flow to build charts, and the corruption shows up in the charts as regular, sequential errors.
I'm using Raspbian; is there anything I can do to tell the OS to execute just my program and nothing more?
I presume you have verified that the port is configured with the correct baud rate, flow control, and other settings?
stty -F /dev/ttyUSB0 -a
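If it is easier to test from code than from stty, you can make the same check explicit when opening the port. A minimal sketch with pyserial; the 9600/8N1/no-flow-control values are only placeholders for whatever your device actually expects:
import serial

# Placeholder settings: match these to the device's datasheet
ser = serial.Serial(
    port='/dev/ttyUSB0',
    baudrate=9600,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    rtscts=False,   # hardware flow control off
    xonxoff=False,  # software flow control off
    timeout=1,
)
print(ser.read(100))  # grab a chunk of raw bytes to inspect
ser.close()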
The solution we picked was to use libusb instead. Yes, it means developing a user-land application to talk to the device, but we also saw issues with /dev/ttyUSBx: the driver path is long and involves many individual drivers, and a problem in any one of them causes problems for the whole chain.
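If you want to try the libusb route from Python rather than Node.js, pyusb keeps it fairly small. A minimal sketch; the vendor/product IDs, the 0x81 bulk IN endpoint, and the 64-byte read size are assumptions you would replace with what lsusb and the device descriptor report:
import usb.core

# Hypothetical IDs; replace with the values lsusb reports for your device
dev = usb.core.find(idVendor=0x0403, idProduct=0x6001)
if dev is None:
    raise RuntimeError('device not found')

# If a kernel serial driver has already claimed the interface, release it first
if dev.is_kernel_driver_active(0):
    dev.detach_kernel_driver(0)
dev.set_configuration()

while True:
    # Read up to 64 bytes from the bulk IN endpoint (0x81 here is an assumption)
    data = dev.read(0x81, 64, timeout=1000)
    print(bytes(data))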
I had a very similar problem and neither minicom nor kermit worked and I'm almost sure that it was because of non printable chars.
Using pyserial in python everything worked like a charm.
It's extremely easy to use. As an example:
import serial

ser = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)
x= ser.read() # read one byte
s = ser.read(10) # read up to ten bytes (timeout)
line = ser.readline() # read a '\n' terminated line
ser.close()
You can find more examples in the documentation.
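For a constant stream feeding charts, a simple read loop is usually all you need. A sketch; the newline-terminated readings and ASCII payload are assumptions about your device's output:
import serial

ser = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)
try:
    while True:
        line = ser.readline()   # raw bytes up to '\n', or less on timeout
        if not line:
            continue            # nothing arrived within the timeout
        print(line.decode('ascii', errors='replace').strip())
finally:
    ser.close()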
Trying to run a few simple tasks via celery. The header of the workerfile looks like
from celery import Celery, group
from time import sleep
celery = Celery('workerprocess', broker='redis://localhost:6379/0', backend='redis://localhost:6379/0')
After passing the jobs, I am trying to read the results like this.
jobresult=group(signatureList).apply_async()
while not jobresult.ready():sleep(30) #Line 6
The code runs perfectly on my desktop. The configuration is Python 3.6.7 and 4.15.0-20-generic #21-Ubuntu SMP.
When I try to run the same thing on my staging server (with the worker node running there too in the background), #Line 6 above throws the following error.
kombu.exceptions.DecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
From the log, it appears the task is sent to the Redis queue and executed correctly, but the main process cannot perform any operation on the jobresult variable to retrieve the output of the calculations. I have checked by pinging redis-cli, and it echoes PONG. The server configuration (the parts I think are relevant) is Python 3.5.2 and 4.4.0-89-generic #112-Ubuntu.
Since the task already runs on my local desktop, I guess it is a matter of dependencies. But the error does not give any clue about what other libraries I should install, whether via pip or apt-get. I looked up Kombu (I don't know what it does, but I guess it's something important) and it is already installed. So how do I resolve this?
This is the output from redis-cli, taken from a comment on the question. I am not sure what it means, though.
127.0.0.1:6379> lrange celery 0 0
(empty list or set)
127.0.0.1:6379>
If your workerprocess is supposed to return utf8-encoded responses then this likely is not a missing dependency, but either a different library version or something wrong with the celery workerprocess as set up on your server.
There is a known problem with celery returning error messages that are not compatible with utf-8 encoding, though the specs say they should be. There are also multiple documented bugs in older versions (fixed in newer versions) that used wrong or mismatched encodings, especially in handling json.
The unfortunate result is that you are seeing a report complaining that the first character of the result (0x80) is invalid, rather than seeing the actual error (or mis-encoded data) being returned.
To debug this, activate enough logging to see the actual data or error result being returned, and work from there.
Alternatively, you may be able to treat the inbound data as binary, rather than utf8, which would allow bytes to come through unscathed. They still won't be readable as utf8 or ascii characters, but at least you will receive them.
You can see a number of ways in which other folks have handled unexpectedly non-utf8 data here.
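If you decide to go the "treat it as binary" route, one way is to move the result serializer off JSON so raw bytes are not forced through a utf-8 decode. This is only a sketch, and it assumes you can change the app configuration on both the client and the worker (and that you trust the broker, since pickle has security implications):
from celery import Celery

celery = Celery('workerprocess',
                broker='redis://localhost:6379/0',
                backend='redis://localhost:6379/0')

# Keep task arguments as JSON, but carry results as pickle so arbitrary
# bytes survive the round trip. Both client and worker must accept pickle.
celery.conf.task_serializer = 'json'
celery.conf.result_serializer = 'pickle'
celery.conf.accept_content = ['json', 'pickle']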
One possibility is that the default Python encoding differs between your local machine and the server.
You can get the default encoding by running
python -c 'import sys; print(sys.getdefaultencoding())'
on both your local machine and the server.
If they are not the same, the most general way to change the default encoding is via the environment variable:
export PYTHONIOENCODING=new_encoding
But it depends on the environment.
The link below explains more ways to change the encoding.
cf. Changing default encoding of Python?
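A quick way to compare the two environments is to print the relevant encodings on both machines and diff the output. A minimal sketch; add anything else you suspect differs:
import sys
import locale

# Run this on both the desktop and the staging server and compare the output
print('python           :', sys.version.split()[0])
print('default encoding :', sys.getdefaultencoding())
print('preferred locale :', locale.getpreferredencoding())
print('stdout encoding  :', sys.stdout.encoding)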
Most use cases I've seen with xperf involve using xperfview on the same computer. Remote record and local playback don't seem to work well for me; symbols are not resolved correctly. Is there a known issue with remote record and local playback with xperf/xperfview?
Why do you try a remote connection? If you use xperf -d to stop logging, the ETL contains all the metadata, so the symbols can be loaded from any PC you want. Copy it from PC A to PC B and view the ETL there.
Now that the 8.1 version of WPT is out, the recommended way to record traces is not with xperf.exe but with wprui.exe. This makes trace recording much simpler and much less error prone. See this blog post for details:
http://randomascii.wordpress.com/2013/04/20/xperf-basics-recording-a-trace-the-easy-way/
And yes, you absolutely should be able to record traces on one machine and view them on another.
I am trying to capture WLAN samples from gnuradio-companion. I have configured the USRP Source with the following:
Ch0 Gain = 50dB
device addr : 192.168.10.3
Center Frequency : 2.437GHz
Sample Rate : 11M
But when I execute the model, I receive the overflow message at the console. Any hints on whether the configuration is proper for collecting the samples?
Here is the attached model:
An 'overflow' indicates that your computer is receiving data faster than it is capable of processing it.
I realize this is an old question, but for anyone else who finds it hoping for something useful: remember that your computer must process the samples. Here, you are dumping the samples into two graphical sinks and also writing them to your hard disk at 11 MSps (with complex float samples that is 11e6 * 8 bytes, about 88 MB/s, or roughly 700 Mbit/s).
If you are on a machine that can't keep up with that, the overflows would be expected!
You do not state whether you run as root, but running as root solved the problem for me. Running as root gives you more privileges, so the process can, for example, be given a higher scheduling priority and make better use of the processor.
I'm working with a GPS module that is transferring data to my Mac over a serial RS232-to-USB interface. I've written an Objective-C program that takes the raw data and converts it into meaningful information.
Using a program called goSerial, I'm able to log all incoming data into a text file. I have been able to make my program read the text file and process data line by line.
I would like this procedure to happen in real time, i.e. as soon as the data is received, it gets logged into the text file and my program reads it. The first part already happens automatically; that is, the text file is constantly appended to (when it is not open). Is it possible to monitor a text file for appended data and only read the new lines? Also, will doing this affect the ability of new incoming data to be saved?
Thanks!!!
(Also, if anyone knows how I may send serial data directly to Xcode, please let me know!)
I'm not sure how the serial-to-USB adapter affects things, but traditionally Unix accesses serial devices using the device-file mechanism, which treats the input from the device as a file to be read. You would use NSFileHandle to read the file from Cocoa/Foundation. You probably want to check out the IORegistryExplorer app to see how your device shows up.
You can use a kqueue (perhaps with a wrapper such as UKKQueue) to watch the file for changes.
You should be able to create a Unix domain socket, which you can then have your goSerial application open (as it looks like a normal file on the filesystem), and then read from the other end, line by line, in your application. This is probably the easiest way. Alternatively, have a look at the source of tail in GNU coreutils, specifically its -f option (although, thinking about it more, you'll probably want to look at how the FreeBSD implementation works, as I believe the GNU version uses some Linux-specific callback).
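If it helps to see the shape of that "follow appended data" logic, here is a minimal polling sketch. It is shown in Python for brevity; gps.log is a stand-in for whatever file goSerial actually writes, and the same seek-to-end-then-read idea maps onto NSFileHandle in your Objective-C program:
import os
import time

# Hypothetical log path; use the file goSerial is actually writing to
with open('gps.log', 'r') as f:
    f.seek(0, os.SEEK_END)       # start at the end so only new lines are read
    while True:
        line = f.readline()
        if not line:
            time.sleep(0.1)      # nothing new yet, poll again shortly
            continue
        print('new line:', line.strip())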
Is there a way to force a Samba process to close a given file without killing it?
Samba opens a process for each client connection, and sometimes I see it holding files open far longer than needed. Usually I just kill the process, and the (Windows) client will reopen it the next time it accesses the share; but sometimes it's actively reading another file for a long time, and I'd like to just 'kill' one file, not the whole connection.
Edit: I've tried 'net rpc file close <fileid>', but it doesn't seem to work. Does anybody know why?
Edit: this is the best mention I've found of something similar. It seems to be a problem in the Win32 client, something that Microsoft's servers have a workaround for, but Samba doesn't. I wish the net rpc file close <fileid> command worked; I'll keep trying to find out why it doesn't. I'm accepting LuckyLindy's answer, even though it didn't solve the problem, because it's the only useful procedure in this case.
This happens all the time on our systems, particularly when connecting to Samba from a Win98 machine. We follow these steps to solve it (which are probably similar to yours):
See which computer is using the file (i.e. lsof|grep -i <file_name>)
Try to open that file from the offending computer, or see if a process is hiding in task manager that we can close
If no luck, have the user exit any important network programs
Kill the user's Samba process from linux (i.e. kill -9 <pid>)
I wish there was a better way!
I am creating a new answer, since my first answer really just contained more questions and was not a whole lot of help.
After doing a bit of searching, I have not been able to find any current open bugs for the latest version of Samba. Please check out the Samba Bug Report website and create a new bug; this is the simplest way to get someone to suggest ideas for a possible fix and to have the developers look at the issue. LuckyLindy left a comment on my previous answer saying that this is the way it has been for five years now, but the project is open source, and the best way to fix something that is wrong is by reporting it and/or providing patches.
I have also found one mailing list entry: Samba Open files. They suggest adding posix locking = no to the configuration file; as long as you are not also sharing the same files over NFS, not locking them should be okay, assuming the file being held open is actually locked.
If you wanted to, you could write a program that uses ptrace to attach to the Samba process and then unlock and close all of its files. However, be aware that this might leave Samba in an unknown state, which can be more dangerous.
The workaround I have already mentioned is to periodically restart Samba. I know it is not a solution, but it might work temporarily.
This is probably answered here: How to close a file descriptor from another process in unix systems
At a guess, 'net rpc file close' probably doesn't work because the interprocess communication telling Samba to close the file winds up not being looked at until the file you want to close is done being read.
If there isn't an explicit option in Samba for this, then it is impossible to close an open file descriptor externally using standard Unix interfaces.
Generally speaking, you can't meddle with a process's file descriptors from the outside. As root you can of course do it, as seen in this Phrack article from 1997: http://www.phrack.org/issues.html?issue=51&id=5#article - I wouldn't recommend doing that on a production system, though...
The better question in this case would be why? Why do you want to close a file early? What purpose does it ultimately have to close the file? What are you attempting to accomplish?
Samba provides commands for viewing open files and closing them.
To list all open files:
net rpc file -U ADadmin%password
Replace ADadmin and password with the credentials of a Windows AD domain admin. This gives you the file id, the username of whoever has it open, the lock status, and the filename. You'll frequently want to filter the results by piping them through grep.
Once you've found a file you want to close, copy its file id number and use this command:
net rpc file close fileid -U ADadmin%password
I needed to accomplish something like this, so that I could easily unmount devices I happened to be sharing. I wrote this quick bash script:
#!/bin/bash
# Collect the PIDs of smbd processes holding locks on paths matching $1
PIDS_TO_CLOSE=$(smbstatus -L | tail -n-3 | grep "$1" | cut -d' ' -f1 - | sort -u | sed '/^$/d')
for PID in $PIDS_TO_CLOSE; do
    kill $PID
done
It takes a single argument, the path to close:
smbclose /media/drive
Any path that matches that argument (by grep) is closed, so you should be pretty specific with it. (Only files open through samba are affected.) Obviously, you need root to close files opened by other users, but it works fine for files you have open. Note that as with any other force closing of a file, data corruption can occur. As long as the files are inactive, it should be fine though.
It's pretty ugly, but for my use-case (closing whole mount points) it works well enough.