I am currently following the steps on openthread.io to set up a Raspberry Pi 3B as a border router. The NCP and the joiner are both KW41Z boards, built with the corresponding switches set. When I try to add the joiner using "joiner start J01NU5", it returns "Join Failed [Security]".
I set the passphrase to "J01NU5" in the GUI to match the steps. Is there something else that needs to be set on the joiner (panid, PSK, key, etc.) before starting the joiner process?
Any help would be greatly appreciated!
You must first bring up the joiner network interface with
> ifconfig up
Nothing else is necessary on the joiner before typing
> joiner start J01NU5
You must, however, have a commissioner that has 'started'. For example, with a CLI commissioner device, before attempting to join you should type:
> commissioner start
> commissioner joiner add * J01NU5
I would test this first, without using the Thread app + border router. It is possible that the border router is already the commissioner. In this case, the above commands might fail silently. You can power down your border router and then restart your CLI commissioner device, just to be sure.
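In case it helps, here is a rough end-to-end sketch of that CLI-commissioner test, assuming both builds have the commissioner/joiner features enabled; the channel, panid, and masterkey values below are only examples, and the exact subcommand spelling can differ between OpenThread versions. On the commissioner device:
> channel 15
> panid 0x1234
> masterkey 00112233445566778899aabbccddeeff
> ifconfig up
> thread start
> commissioner start
> commissioner joiner add * J01NU5
Then, on the joiner:
> ifconfig up
> joiner start J01NU5
> thread start
If it works, the joiner should report "Join success" before you type thread start.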
Once you know that your joiner is capable of joining a new network (perhaps JOINER=1 was not specified when you compiled your joiner--check this), you can try with your border router as commissioner:
If you are using your border router and the Thread app, you should first get the eui64 on your joiner device, because the Thread app will ask you for it.
There are a number of things that can screw up the process:
Your joiner does not have commissioning/encryption capability (this is possible with the NXP devices if you compiled the stock openthread source). You can always attempt to join your network (if you know the network data) without going through the commissioning step, by typing:
> channel ##
> panid 0x####
> masterkey ################
> ifconfig up
> thread start
Another possible problem: perhaps you configured your border router manually (without using the http://localhost interface), and you did not restart otbr-agent and otbr-www.
I would first try joining by specifying the masterkey, panid, and channel (directly above). Then try powering the device off, waiting for it to be forgotten by the network, and joining via a CLI commissioner. Lastly, try the border router and the Thread app (again after powering off and waiting a while).
Good luck,
David
Working in low-bandwidth environments, I sometimes need to share a text console (TTY) screen with a co-worker. I often use the screen(1) tool to keep multiple windows open in the same SSH connection and to be able to reconnect and continue where I left off when my connection drops due to bad internet. I wonder whether we could attach multiple SSH sessions / TTYs to the same screen(1) session, so that my co-worker(s) and I can both use the same console? It should be possible, but is it?
The answer is as follows:
The first screen session is started with
screen ...
then, from another console, someone else logged in as the same user can attach to the same screen session with
screen -x
Then both can work in it; anything typed by one is visible to the other, and so on.
Note: if the screen session has multiple windows, then each attached terminal can switch to a different window independently.
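A slightly more explicit variant uses a named session, so both of you are sure to attach to the same one (the name shared below is arbitrary). The first user starts it with
screen -S shared
and the co-worker, logged in as the same user, attaches with
screen -x shared
If you forget the name, screen -ls lists the running sessions.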
I'm facing issues when communicating with devices through a USB hub. Enumeration works when devices are attached directly to the host port, but some devices have issues when attached through a hub.
Setup: STM32F103C8 - MAX3421E - LUFA (usb stack) (ported to MAX3421E (host) and STM32F103C8T6 (device)) - USB Full-Speed setup
Scenario:
When I attach a device directly to the host, I don't experience any issues enumerating almost all devices (a few seem to be faulty and show weird/nonstandard behavior). But when I try to enumerate through a USB hub, devices start to behave very strangely. I receive many more NAKs from devices than when they are connected directly to the host. Some devices are able to return the Device Descriptor, but retrieving the Configuration Descriptor fails. Some devices return a Toggle Error after several NAKs; so far this could be remedied by delaying the retry of the IN token. There is also different behavior with different hubs: one device has no problems when connected to HUB1, but has issues when connected to HUB2. Then I have HUB3 (7-port), which internally acts as a hub behind a hub. On HUB3 the device works fine on a port behind the secondary internal hub, but not on the primary ports exposed by the "root" hub.
I suspect that the hub's TT (Transaction Translator) could somehow be interfering with the USB communication, but according to the information I have found, the TT should not be active in a Full-Speed setup.
I have checked (many times) that I'm using the correct device address assigned during the SetAddress phase (which is confirmed by the device returning its Device Descriptor). When I step through with the debugger it seems that I can get the Configuration Descriptor as well, but in a normal run it isn't retrieved; this happens only through a hub.
Does anyone have any ideas what to look for? I've run out of ideas after a week of trying to find the root cause.
Thanks
So...
- As usual, after days of searching for the root cause, the solution came naturally right after asking somewhere (this always happens to me, even though I always try before asking).
- When using hubs, make sure you don't suspend SOF generation during control transfers. LUFA just resumes the bus inside its control transfer routines, so make sure you don't stop and re-enable SOF within them (my fault, as I'm using a version ported to the MAX3421E).
- If you have a tight main loop, make sure you don't reinitialize a USB transfer before the previous attempt has completed; and if you do, check that you don't overwrite data that hasn't been fully processed yet (especially when using interrupt-driven transfer-complete processing). [Things seem to work when you have quite a lot of debug output, because it delays those time-critical transfers.]
The enumeration issues over the hub are now gone. Small remaining glitches are subject to further tweaking.
Unfortunately, since I suspected electrical issues, I unsoldered the USB host shield and soldered on another one, which in light of this new information seems to have been unnecessary. Never mind, I have trained my soldering skills.
I used my Raspberry Pi camera successfully some time ago.
Now I tried again to acquire an image with the raspistill -o image.jpg command; the red LED on the camera flashes, but I get this error:
mmal: No data received from sensor.
Check all connections, including the Sunny one on the camera board
Of course the camera connections are fine. Is there any other way to check if the camera is still working?
This error usually appears because of a faulty connection with the camera.
I had the exact same problem in different camera+Pi configurations. These are the cases I encountered:
The connector is not correctly inserted either in the camera or in the Pi.
The Sunny connector (the small yellow one on the camera) is not connected well.
(now it gets interesting)
If you often remove and reinsert the camera in the Pi please be sure to remove all power from the Pi. The sensor is very sensitive and a spark on the wrong pin could burn it. (I did this already unfortunately)
This could also point to a problem with the Pi connector pins. It has been somewhat confirmed that on the Pi 2 the connector may have bad soldering, which can lead to cold solder joints. You can fix this by applying some flux to the pins and then passing the hot tip of a soldering iron over them to reflow the connection.
I used a longer cable that had both connectors on the same side of the cable. If you connect it as it is, you can burn your sensor, and the Pi will not start because of the power surge (the camera also gets very hot in this case). DO NOT REMOVE IT from the Pi without removing power first. With a cable like this you have to remove the blue plastic from one end and bend the cable so that the contacts face the other side. Insert this end into the camera, since it will not be removed/reinserted as often as the Pi end.
Make sure that the silver contacts are well inserted into the PCB connector.
Also, make sure that the sunny connector is firmly attached.
This fixed it for me.
I was experiencing the same problem too, until I found a solution.
I removed the sunny connector (the small yellow connector below the camera on the board) and reattached it in the same place. The camera is working fine after trying this.
I had the same issue. I found out it was a power supply issue.
Try changing your cable and/or your Raspberry Pi power supply adapter.
You can check whether this is the cause of your problem by typing:
$ dmesg
if you see something like this:
[ 44.152029] Under-voltage detected! (0x00050005)
-> Then replace your power supply! :D
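If it helps, on reasonably recent Raspbian images you can also ask the firmware directly whether it has seen under-voltage (any value other than throttled=0x0 means under-voltage or throttling has happened at some point):
$ vcgencmd get_throttled
$ dmesg | grep -i under-voltage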
The only fix for me was to purchase a new camera.
No real root causes identified.
My problem was that I put the camera in while the Pi was on, which might have left the camera shot; i.e. the camera module is static-sensitive, and it's possible that it has been damaged.
Unfortunately if this is the case there's nothing you can realistically do to fix it, just get a replacement.
This specific error shows up when another application is using the camera. In my case, it was Motion.
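One way to find out which process is holding the camera (assuming the competing application goes through the legacy camera or V4L2 stack) is to look for open handles on the relevant device nodes, for example:
$ sudo lsof /dev/video0 /dev/vchiq
$ ps aux | grep -E 'motion|raspistill|raspivid'
Whatever shows up there is what you need to stop before raspistill can grab the camera.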
It might also be that the cable is inserted the wrong way around. I had this problem, and after multiple tries I realized that was the cause.
So I have been trying to find some information on how GNU screen actually works at a high level, without having to read through the source code, but I have been unable to do so.
What does screen do that it is able to stick around even when a terminal session is closed? Does it run as a daemon or something, so that everyone who invokes screen just connects to it and it then finds which pseudo-tty session to attach to? Or does it do something completely different?
There's a lot of potential questions in this question, so I'm going to concentrate on just one:
What does screen do that it is able to stick around even when a terminal session is closed?
Screen catches HUP signals, so it doesn't automatically exit when its controlling terminal goes away. Instead, when it gets a HUP, it goes into background mode (since it no longer has an actual terminal attached) and waits. When you start screen with various -d/-D/-r/-R/-RR options, it looks for an already running screen process (possibly detached after having received a HUP, and/or possibly detaching it directly by sending it a HUP) and takes over the child terminal sessions of that screen process (a cooperative process whereby the old screen process sends all the master PTYs to the new process for it to manage, before exiting).
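You can watch this from a shell: start a session that is already detached, confirm that the backend is still alive, and reattach later (the session name demo is arbitrary):
screen -d -m -S demo
screen -ls
screen -r demo
The -ls output shows the demo session as (Detached) even though no terminal is attached to it.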
I haven't studied Screen itself in depth, but I wrote a program that was inspired by it on the user end and I can describe how mine works:
The project is my terminal emulator which has an emulation core, a gui front end, a terminal front end, and the two screen-like components: attach.d and detachable.d.
https://github.com/adamdruppe/terminal-emulator
attach.d is the front end. It connects to a particular terminal through a unix domain socket and forwards the output from the active screen out to the actual terminal. It also sends messages to the backend process telling it to redraw from scratch and a few other things (and more to come, my thing isn't perfect yet).
https://github.com/adamdruppe/terminal-emulator/blob/master/attach.d
My terminal.d library provides an event loop that translates terminal input and signals. One of them is HUP signals, which are sent when the controlling terminal is closed. When it sees that, the front-end attach process closes, leaving the backend process in place:
https://github.com/adamdruppe/terminal-emulator/blob/master/attach.d#L709
When attach starts up and cannot connect to an existing process, it forks and creates a detachable backend:
https://github.com/adamdruppe/terminal-emulator/blob/master/attach.d#L454
By separating the front end and the backend into separate processes, closing one leaves the other intact. Screen does this too: run ps aux | grep -i screen. The all-caps SCREEN processes are backends; the lowercase screen processes are frontends.
me 3479 0.0 0.0 26564 1416 pts/14 S+ 19:01 0:00 screen
root 3480 0.0 0.0 26716 1528 ? Ss 19:01 0:00 SCREEN
There, I just started screen and you can see the two separate processes. screen forked and made SCREEN, which actually holds the state. Detaching kills process 3479, but leaves process 3480 in place.
The backend of mine is a full blown terminal emulator that maintains all internal state:
https://github.com/adamdruppe/terminal-emulator/blob/master/detachable.d
Here: https://github.com/adamdruppe/terminal-emulator/blob/master/detachable.d#L140 it reads from the socket attach talks to, reads the message, and forwards it to the application as terminal input.
The redraw method, here: https://github.com/adamdruppe/terminal-emulator/blob/master/detachable.d#L318 loops through its internal screen buffer - stored as an array of attributes and characters - and writes them back out to the attached terminal.
You can see that both source files aren't terribly long - most of the work is done in the terminal emulator core. https://github.com/adamdruppe/terminal-emulator/blob/master/terminalemulator.d and, to a lesser extent, my terminal client library (think: custom ncurses lib) https://github.com/adamdruppe/arsd/blob/master/terminal.d
You can also see that attach.d manages multiple connections, using select to loop through each open socket and only drawing the active screen: https://github.com/adamdruppe/terminal-emulator/blob/master/attach.d#L300
My program runs a separate backend process for each individual terminal screen. GNU screen uses one process for a whole session, which may have multiple screens. My guess is screen does more work in the backend than I do, but it is basically the same principle: watch for input on each pty and update an internal screen buffer. When you run screen -r to attach to it, it connects through a named pipe - a FIFO in the file system (on my gnu screen config, they are stored in /tmp/screens) instead of the unix socket I used, but same principle - it gets the state dumped to the screen, then carries on forwarding info back and forth.
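If you are curious, you can see the rendezvous files screen uses for this on your own system; depending on the build they live in a directory such as /var/run/screen/S-<user> or /tmp/screens/S-<user>, and screen -ls prints the location at the end of its output:
ls -l /var/run/screen/S-$USER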
Anyway, I've been told my source is easier to read than screen's and xterm's, and while not identical, they are similar, so maybe you can get more ideas by browsing through them too.
I am using fswebcam to capture an image when an email is received. I thought that it would be nice to have Motion running as well. I installed Motion, and that worked fine.
However, when I tried to use fswebcam to take a picture,
I received the error:
Error selecting input 0
VIDIOC_S_INPUT: Device or resource busy
Then I stopped Motion and tried again. It worked. So I can only have one program accessing the camera at a time.
Is there any way round this?
Use one or the other; two apps can't read from the same video camera device at the same time.
Motion is capable of running a script on event detection, so if you want to do that, look in the config for on_area_detected or on_movie_start.
Then get it to call some kind of shell script that attaches the current photo and emails it to you.
Hope you don't get too many events, else there will be too many emails to find the important ones.
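A rough sketch of what that could look like, assuming Motion's on_picture_save hook with its %f filename specifier and mpack as the mailer (check the option names against your Motion version; the script path and address are just placeholders). In /etc/motion/motion.conf:
on_picture_save /usr/local/bin/mail_snapshot.sh %f
and /usr/local/bin/mail_snapshot.sh:
#!/bin/sh
# $1 is the full path of the picture Motion just saved
mpack -s "Motion event" "$1" you@example.com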
You can use the Motion HTTP Based Control. Simply call:
http://yourraspberrypi:XXXX/0/action/snapshot
using cURL or any other method that you prefer.
Where yourraspberrypi should be the IP of your Pi, and XXXX should be the port defined under 'control_port', in /etc/motion/motion.conf
Note: A symbolic link called lastsnap.jpg created in the target_dir will always point to the latest snapshot, unless snapshot_filename is exactly 'lastsnap'
You can also use the HTTP Based Control, for example, to stop/start motion detection
More info here: http://www.lavrsen.dk/foswiki/bin/view/Motion/MotionHttpAPI
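For example, with Motion's default control_port of 8080 (check yours in motion.conf), you could trigger a snapshot and then pick up the file it produces:
$ curl http://yourraspberrypi:8080/0/action/snapshot
$ cp /path/to/target_dir/lastsnap.jpg ~/snapshot.jpg
The lastsnap.jpg link mentioned above makes the second step easy, because you don't need to know the generated filename.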
It worked for me after uninstalling motion. To do so, run the following command from the terminal:
sudo apt-get remove motion
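If you would rather keep Motion installed and just free the camera temporarily, stopping its service before running fswebcam may be enough (assuming the packaged service is called motion):
sudo service motion stop
fswebcam image.jpg
sudo service motion start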