I'm making this post because I couldn't find the answer (I searched with the tags Shutdown, Reboot and SSH).
For a few months now, 2 of my 3 Corals have been shutting down inexplicably and then becoming unreachable.
When this occurs, I'm forced to go to the device (serial connectivity doesn't work either and the fan isn't running at that point) and unplug and re-plug the power in order to be able to reconnect through SSH.
What's the best thing to do?
Thanks a lot.
I've seen similar behavior with low power. Be sure you are connecting your board to a 2-3 A power supply. A couple indications that this might be your problem are:
Your board is connected to your computer USB power
Your board runs fine until you load it (i.e. start inferencing on the TPU)
Those are the first things I would look for.
I am trying to connect to an MSTR Intelligence Server in Seattle from MSTR Developer running on my laptop connected in Bangalore. It takes an average of 10+ seconds for any action I do in Developer, like logging in, opening folders, or opening a report. It is almost impractical to do any report development this way (not to mention the frustration).
When my colleague connects to the same instance/project from Seattle, he doesn't face any delays. So I figure that this is a network issue and doesn't have much to do with the metadata or indexes. The network response time to the box is 30 ms and 300 ms on average from Seattle and Bangalore respectively. I found online that 280 ms is the average response time from India to the US. Accessing the reports and projects via the web interface is smooth, though.
Have you ever experienced a situation like this before? Can the network delays cause that much trouble on MicroStrategy? Please help…
PS: This question is not quite a fit for SO, but I guess that MSTR developers face this problem regularly and maybe they know a fix. Hence posting it here rather than SU or somewhere else.
This is a pretty common problem, in my experience. I believe that MicroStrategy's network traffic is XML-based, so network bandwidth as well as latency is an issue. Because the protocol is chatty, the round trips add up: the 10-second actions you're seeing are consistent with a few dozen request/response exchanges per action, which at 300 ms each is over 10 seconds, while the same exchanges at 30 ms take about a second.
Usually, the web server is more responsive because:
It is performing "simpler" tasks than Developer
The network-intensive traffic is between I-Server and web server, so if they're colocated, performance will be reasonable.
I'm afraid I've never come across an effective solution to this issue. Having a "jump server" in the same data centre as the MSTR servers, with the Developer software installed, is usually the most tolerable solution (provided Remote Desktop isn't too laggy).
Same solution here: we have developer VMs on a host in the same datacenter as the server, and we remote desktop into them. From there, we use Developer/Object Manager, etc.
You can still do 90% of the tasks in web.
This may be a simple question, but I hope it's not.
I run some very long-winded SQL operations on my local PC, which is hard-wired to an ADSL modem for internet. The SQL Server and databases are ALL local on my PC, and the processing seems fine (as fast as normal).
However, if my internet drops out, which happens perhaps a few times a night (usually at later hours), my SQL connection also drops with the familiar "Connection lost" error (an error one would get if connecting to SQL over a network).
For me this does not make sense: my SQL connection string refers to the local instance only, and no processing goes over a network of any kind (I sometimes have a VPN active, but not always when this happens).
I can run the same SQL processing without the modem connected, with no issues at all (although it can sometimes take many hours, so I prefer to have the modem connected).
Could this possibly be due to the extra SQL services, i.e. the Browser, somehow being affected by the modem losing its internet connection?
(I would like to know that my ISP-provided modem is not doing any funny business in the background - like examining my data / traffic / etc)
Any help appreciated
Try this connection string:
Provider=SQLOLEDB.1;Integrated Security=SSPI;Initial Catalog=DATABASE_I_USE;Data Source=127.0.0.1\INSTANCENAME
This should force the traffic onto the loopback adapter, which will allow it to avoid cases where your physical card's network link may drop.
If that does not work, try explicitly installing a loopback adapter.
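If you want to double-check which protocol a session actually ends up using (assuming SQL Server 2005 or later, where this DMV exists), you can run:

-- Returns 'Shared memory', 'TCP', 'Named pipe', etc. for the current session
SELECT net_transport
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;

If it still reports TCP after changing the connection string, the loopback adapter suggestion above is the next thing to try.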
Other than the possible lag issues, has anyone tried developing over a remote desktop connection? What are the pros and cons associated with it?
A lot of times for me it's the limitations of the remote desktop connection, be it VNC or RDP or whatever. For example:
1. My workstation has two monitors. Remotely viewing my workstation reduces it to one.
2. Lag is tolerable in the IDE, but not with anything image-heavy. Everything from photoshopping to web browsing is done locally, not on the remote machine.
3. Adding to #2, when splitting up tasks between the local and remote machine, there's that extra layer of getting the two to play nice together that adds just a little bit of overhead per task, which adds up to a lot overall. Something as simple as saving a file from the web browser and opening it in the IDE takes more steps.
(I may think of more and add them later.)
All in all, it's fine if the setup can be adjusted properly. In my experience, the companies I've worked for have defined their remote connection capabilities by the needs of someone other than the software developers, and thus leave us with little pet peeves that make the process just slightly more difficult than it needs to be.
Here is my take on it from my experience:
PROS: Single dev environment, only need to license one set of tools (if applicable)
CONS: The lag got the best of me. Typing only to have it show up 1-3 seconds later... sometimes; other times it works great. In VS, the popup notifications sometimes take forever to display as well. Other cons include having to share your desktop with another employee, and moving files to/from the dev machine, as RDP does not natively let you drag and drop files.
Same as other posters: lag when using tools that affect screen painting in Visual Studio (ReSharper, CodeRush) is a real problem. Some things involving the mouse (dragging grid columns) are very difficult to use.
I'd add that about one in every 10-15 times I go to log back in on the physical workstation at work, it takes the stupid thing about 2 minutes to finally refresh the displays.
One of our current milestones on our (open source) project at the moment is to complete USB support, and as such we're working hard on drivers at the moment. Our current development focuses on EHCI on both x86 and ARM (OMAP35xx SoC specifically, EHCI-only in the silicon of the board). We have mostly everything running smoothly in a variety of emulators - VMware (free and non-free versions), QEMU, and VirtualBox.
When we do testing on real hardware however, we get absolutely nowhere. The basic routine for device enumeration in our system goes something like this:
Turn on port power (if the option is available) and wait for power to stabilise to the device
Perform a port reset (held for 50 ms) and then wait as long as needed for the reset to complete (while loop)
If the port has a device present, and is enabled, notify the system that a new USB device is available for initialisation.
Send the SET ADDRESS command to assign an address to the device. This is where we run into problems everywhere:
The SETUP transaction for this command completes without error
The zero-length IN transaction (status phase) throws a transaction error, halts the qTD, and disables the port.
Our timing delays are basically the same as Linux's driver (if anything, longer).
According to the USB 2.0 specification, this behaviour is a "Port Error" (section 11.8) but to be completely honest I don't see how to translate its description of a port error into a working solution for our driver. As we are an open source project we also don't have the money to go out and purchase a proper hardware USB protocol analyser to investigate exactly what's going on on the line either.
Has anyone faced a similar problem and knows a solution?
We have identified that the cause of this problem was a timing issue, but in our case the issue was too much of a delay.
By modifying our qTD/QH creation code to create a single QH with multiple linked qTDs associated with it, we've been able to get successful runs on physical hardware.
We also had to use the EHCI 64-bit data structures, which had not been implemented previously.
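For anyone hitting the same wall, here is a rough sketch of what that ended up looking like, written as generic C rather than our actual code: virt_to_phys(), qtd_fill() and the struct/field names are placeholders, and it assumes a controller with 64-bit addressing enabled (HCCPARAMS bit 0). The field encodings follow the EHCI 1.0 specification, sections 3.5 and 3.6.

#include <stdint.h>
#include <string.h>

/* Minimal sketch of a 64-bit-capable EHCI qTD (spec section 3.5). GCC rounds
 * sizeof up to the alignment, so arrays of these stay 32-byte aligned. */
struct ehci_qtd {
    volatile uint32_t next;         /* next qTD pointer, bit 0 = Terminate */
    volatile uint32_t alt_next;     /* alternate next qTD pointer */
    volatile uint32_t token;        /* status, PID, CERR, IOC, total bytes, toggle */
    volatile uint32_t buffer[5];    /* buffer page pointers, low 32 bits */
    volatile uint32_t buffer_hi[5]; /* 64-bit extension: high 32 bits per page */
} __attribute__((aligned(32)));

/* Minimal 64-bit-capable QH (spec section 3.6); only the fields used below are
 * broken out, and the transfer overlay reuses the qTD layout. */
struct ehci_qh {
    volatile uint32_t horiz_link;   /* QH horizontal link pointer */
    volatile uint32_t ep_chars;     /* device address, endpoint, max packet, dtc, ... */
    volatile uint32_t ep_caps;
    volatile uint32_t current_qtd;
    struct ehci_qtd   overlay;      /* transfer overlay */
} __attribute__((aligned(32)));

#define QTD_TERMINATE        1u
#define QTD_TOKEN_ACTIVE     (1u << 7)
#define QTD_TOKEN_PID_OUT    (0u << 8)
#define QTD_TOKEN_PID_IN     (1u << 8)
#define QTD_TOKEN_PID_SETUP  (2u << 8)
#define QTD_TOKEN_CERR       (3u << 10)   /* tolerate 3 transaction errors */
#define QTD_TOKEN_IOC        (1u << 15)
#define QTD_TOKEN_BYTES(n)   ((uint32_t)(n) << 16)
#define QTD_TOKEN_TOGGLE     (1u << 31)   /* DATA1 */

/* Placeholder for whatever your kernel uses to get a physical address. */
extern uint64_t virt_to_phys(void *p);

static void qtd_fill(struct ehci_qtd *qtd, uint32_t pid, void *buf,
                     uint32_t len, int data1, int ioc)
{
    memset((void *)qtd, 0, sizeof(*qtd));
    qtd->next     = QTD_TERMINATE;
    qtd->alt_next = QTD_TERMINATE;
    qtd->token    = QTD_TOKEN_ACTIVE | pid | QTD_TOKEN_CERR |
                    QTD_TOKEN_BYTES(len) |
                    (data1 ? QTD_TOKEN_TOGGLE : 0) |
                    (ioc ? QTD_TOKEN_IOC : 0);
    if (len) {
        uint64_t phys = virt_to_phys(buf);
        qtd->buffer[0]    = (uint32_t)phys;
        qtd->buffer_hi[0] = (uint32_t)(phys >> 32);
    }
}

/* Build SET_ADDRESS as one chain hanging off a single QH:
 * SETUP stage (DATA0) -> zero-length IN status stage (DATA1).
 * Assumes the QH's data toggle control bit is set in ep_chars, so the toggle
 * comes from each qTD, and that the rest of the overlay starts out zeroed. */
static void queue_set_address(struct ehci_qh *qh, struct ehci_qtd qtds[2],
                              void *setup_packet /* the 8-byte SETUP data */)
{
    qtd_fill(&qtds[0], QTD_TOKEN_PID_SETUP, setup_packet, 8, 0, 0);
    qtd_fill(&qtds[1], QTD_TOKEN_PID_IN, NULL, 0, 1, 1);

    /* Chain the status qTD behind the SETUP qTD instead of queueing the
     * two stages as separate transfers. */
    qtds[0].next = (uint32_t)virt_to_phys(&qtds[1]);

    /* Hand the whole chain to the QH by pointing the overlay's next-qTD
     * field at the first qTD; the controller fetches it from there. */
    qh->current_qtd      = 0;
    qh->overlay.alt_next = QTD_TERMINATE;
    qh->overlay.next     = (uint32_t)virt_to_phys(&qtds[0]);
}

The important change is at the end: both stages are linked into one chain under a single QH via its overlay, rather than being queued as independent transfers.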
I have 5 servers on a LAN without Internet connection. I need them to keep the clock in sync among them.
I could configure them as NTP peers, and set a high stratum for the local clock of one of them. In this way, the other four would sync with that clock.
What I actually want is for them to agree on a time using all 5 of the local clocks (i.e. doing some kind of average), for reasons of robustness and precision. Is that possible with NTP?
PS: I do not want to use an external clock source.
EDIT: and no scripting outside NTP's features; that could only make precision worse :)
If you average 5 drifting clocks, the only thing you get is another drifting clock that's harder to correct. It won't be more precise. NTP uses multiple servers to increase precision because it takes network latency into account. Since all your systems are on a fast local network, you just need one server.
Set up two systems as NTP servers, one primary and, if you feel the need, one backup. Have all other systems synchronize to them. This will be significantly easier to set up than the clock-averaging solution, and you won't have to develop any crazy scripts.
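A minimal sketch of that setup in ntp.conf might look like the following (the 192.168.1.x addresses are placeholders, and 127.127.1.0 is ntpd's pseudo-address for the local clock driver, not a real host):

# Primary server: serve its own clock, marked with a deliberately poor
# stratum so a real reference would win if you ever add one.
server 127.127.1.0
fudge  127.127.1.0 stratum 10

# Backup server: prefer the primary, fall back to its own clock at an
# even worse stratum if the primary disappears.
server 192.168.1.10 iburst prefer
server 127.127.1.0
fudge  127.127.1.0 stratum 12

# The other three servers just point at both:
server 192.168.1.10 iburst
server 192.168.1.11 iburst

Newer ntpd releases also have orphan mode (tos orphan <stratum>), which is a cleaner alternative to the local clock driver if your version supports it.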
You might be able to have one of them listen for the times from each computer, perform an average, set the average as its own time, and broadcast that time to all the other computers. It seems a little excessive, though.
You can set up one of them as an NTP server that broadcasts its time on the local network, and the others as clients that listen on the local network.
edit:
I missed the average part. Well, in that case, you can probably write a script on the local server to collect times from all the clients, compute the average, and update its own time with that value.
You may even want to get rid of NTP in that case and just use the script to update the time on all the servers.
I wish I could give a definitive proposal, but I don't know enough about your environment. No matter what, you'll likely be doing some sort of script kung fu.
If it's Unix/Linux, I would set everyone up with SSH authorized keys to poll each other's date +%s command (to get the epoch), average those times with awk or something, and then set each machine's own local date.
Or perhaps it would be more secure (and reliable) to have one authoritative machine check everyone's time in the same manner, average it, and then provision itself and every other host to that average.
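As a very rough sketch of that second approach (the host names are hypothetical, passwordless SSH and sudo plus GNU date are assumed everywhere, and it ignores the second or two that elapse while polling):

#!/bin/sh
HOSTS="srv1 srv2 srv3 srv4 srv5"   # placeholders for your five servers

# Collect each host's epoch time and average them with awk...
AVG=$(for h in $HOSTS; do ssh "$h" date +%s; done \
      | awk '{ sum += $1; n++ } END { printf "%d\n", sum / n }')

# ...then push the averaged time back out to every host.
for h in $HOSTS; do
    ssh "$h" sudo date -s "@$AVG"
done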
On Windows you'll probably be looking into VBScript and WMI.
EDIT:
You may run into some weird problems if anyone's clock drifts ahead of the average, and my guess is about half of them will ;). Future timestamps can be rather strange. It will be up to you to determine how frequently this synchronization will need to occur.