Dfu-util running 10 times slower on one PC than on another - embedded

I am using Dfu-util to flash firmware onto an NXP device. It all works fine on my Windows 7 64-bit desktop, but on my Dell Inspiron 6400 laptop, running Windows 10 32-bit (rather well, as it happens), the firmware download takes about ten times as long. Any pointers or suggestions would be much appreciated.

Answering here two years later because I found a possible solution, at least applicable in my case.
Connecting the board directly to the PC resulted in extremely slow DFU speed. However, adding a simple hub (4-port USB 2.0, unpowered) in between immediately made DFU download much faster.
It may be that the difference between the two computers came down to one of them having an internal hub, or some other difference in USB topology with the same effect.
As to why this helps I have no clue, but it did, and it is perfectly reproducible in my case.
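In case it helps anyone compare machines: a small pyusb script (pip install pyusb; on Windows it also needs a libusb backend) can print the negotiated speed and the hub chain of every device, so you can see whether the two PCs enumerate the board differently. This is just an illustrative sketch, not anything from dfu-util itself:

    import usb.core
    import usb.util

    SPEED_NAMES = {
        usb.util.SPEED_LOW: "low (1.5 Mbit/s)",
        usb.util.SPEED_FULL: "full (12 Mbit/s)",
        usb.util.SPEED_HIGH: "high (480 Mbit/s)",
    }

    for dev in usb.core.find(find_all=True):
        # port_numbers is the hub chain the device hangs off; it differs
        # if one machine routes the port through an internal hub.
        chain = ".".join(str(p) for p in (dev.port_numbers or ()))
        print("bus %d port %s VID=%04x PID=%04x speed=%s" % (
            dev.bus, chain or "?", dev.idVendor, dev.idProduct,
            SPEED_NAMES.get(dev.speed, "unknown")))

If the slow machine shows the board at a different speed, or behind a different hub chain, that points at the topology explanation above.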

Related

USB 2.0 "This device cannot start. (Code 10)"

This is probably a long-shot question, but I'll try it anyway.
I'm developing hardware using PIC Microcontrollers (MicroChip). Communication is done through a FS USB 2.0 link.
I connect the microcontrollers to a PC running Windows 10 Home edition, version 21H1, build 19043.1826. The processor is an AMD Ryzen 5 3600 6-Core Processor.
First I used the PIC18F45K50, for which everything worked fine from day one. But due to the shortages on the market, I am now experimenting with the PIC18F47J53. Both microcontrollers are working fine, as I can (for example) control a MAX7219-driven display (3 x 7-segment) and a bunch of LEDs through an STP08CP05TTR. Clock timings also seem OK - I measured them with an oscilloscope.
These two microcontrollers are pretty much the same, at least for core functionality such as USB. The differences relevant to the issue I'm reporting here are:
PIC18F45K50: uses an internal 8 MHz clock and has on-board correction logic to keep the clock synced for FS USB - this is a 5 V device
PIC18F47J53: uses a 16 MHz crystal, so everything should be within the USB 2.0 specs - this is a 3.3 V device
I'm using the MPLAB X IDE v5.45 with MCC (the MPLAB Code Configurator), in which I set up the System Module (to get the correct clock frequencies, including the 48 MHz for USB) and configure the USB.
In both microcontrollers the USB setup is exactly the same. I even checked the four files that MCC generates automatically, and except for the descriptors (I used different names), they are identical.
When I connect the USB to my PC (same port), the PIC18F45K50 works perfectly, but the PIC18F47J53 gives error code 10.
This does not happen every time. For example, out of 10 attempts (connecting/disconnecting the cable), I got the error 7 times; once the device didn't even appear, and the other 2 times I read "The device is working properly." Although in the latter case the software that communicates with my controller still doesn't work, so something is still wrong.
Based on the above, the first thing I would think of is some hardware issue. The strange thing, though, is that values like the vendor ID (0x4D8), product ID (0xA), BCD device release (0x100), serial number (12345678), etc. always seem to be read out correctly. If there were a hardware problem, shouldn't I see more random failures there as well? Or is this data read out in a slower mode than Full Speed (because that could of course explain it)?
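One way to check what the host actually negotiated is to dump the enumerated descriptors and speed with pyusb (pip install pyusb). Descriptors travel over the normal control pipe at the negotiated speed, so there is no separate slow fallback mode for them on a full-speed link. A minimal sketch, using the VID/PID mentioned above:

    import usb.core
    import usb.util

    # VID/PID from the question (Microchip's 0x04D8 / 0x000A).
    dev = usb.core.find(idVendor=0x04D8, idProduct=0x000A)
    if dev is None:
        raise SystemExit("device not found")

    print("bcdUSB    : 0x%04x" % dev.bcdUSB)
    print("bcdDevice : 0x%04x" % dev.bcdDevice)
    print("serial    : %s" % usb.util.get_string(dev, dev.iSerialNumber))
    # pyusb encodes full speed as usb.util.SPEED_FULL (2).
    print("speed     : %s" % dev.speed)

If that prints full speed on every attempt, the link itself negotiated FS, and the enumeration data is not being read in some slower mode.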
Below are screenshots taken via "Device Manager / Ports (COM & LPT) / my serial device", then selecting the property in the Details tab.
If I compare the properties from the working microcontroller (PIC18F45K50) with the not working one (PIC18F47J53), it looks like all are exactly the same.
I also compared the D- (CH1) and D+ (CH2) signals of the two microcontrollers with my oscilloscope. My USB knowledge is not detailed enough to interpret the signals, but I can tell that both look exactly the same to me, timing-wise and voltage-level-wise. Be aware that the CH2 signal of the PIC18F47J53 (D+) in the second screenshot is clipping in the picture below, but I measured it again later and it shows the same voltage level as on the PIC18F45K50.
Does anybody here have a clue where I should look first? The good news is that I have a working and a non-working version, so I can debug step by step and compare. But some hints on where to start would be appreciated.
EDIT 24JUL2022
I did the measurement with my oscilloscope again. This time I soldered 2 wires to the USB port so I could easily attach my probes. Now both the D- and D+ signals have a Vpp of about 3.3 V. I placed cursors, which show a pulse width of about 84 ns; that matches the USB FS bit rate of 12 Mbit/s (it should be 83.33 ns).
I found the issue. The Vusb pin on my PIC18F47J53 had a bad (or possibly no) connection. I gave it another touch of the soldering iron, and bingo! The "error 10" has disappeared completely; every time I connect/disconnect I now get "This device is working properly.", and error 10 never reappears. I now also see a continuous signal on my oscilloscope - not one that disappears after a while. And I could already send and receive some commands.

Weird graphical glitch after driver "crash" for AMD Radeon 5700 XT

first time here!
I've stumbled across a very weird visual glitch, consisting of big white "pixels" arranged in an almost perfectly repetitive pattern, occurring after my graphics driver crashed yet again. A friend of mine thought the pattern looked too regular for an ordinary artifact. Photo of Glitch
My GPU has been acting this way for a long while already (I bought it new; no mining or extreme stress beforehand), initially on Windows 10, where I tried many steps, from driver updates and clean re-installs to re-installing the whole system.
Now I use Arch Linux on my friend's recommendation, but the error still occurs, although less frequently so far. Just this time it had a distinct look to it, maybe because I hadn't been dumped into a BSoD?
The question I have now is whether this is a case of the GPU being broken, or whether some other hardware component is acting up.
Might it be a VBIOS issue? VRAM? Maybe the motherboard? I have cleaned the PCIe slot many times, and temperatures don't seem to be an issue.
My PC specs:
CPU: AMD Ryzen 7 3700X
GPU: AMD Radeon 5700 XT 50th Anniversary Edition
RAM: 2x16GB DDR4
MB: MSI B450 Gaming Plus
PSU: Corsair VS650 650 Watt
Thank you in advance for any assistance!

Computer crashes and shows HDMI no signal, must be unplugged to restart

When playing certain games or viewing certain websites, my computer suddenly crashes and my monitor displays "HDMI no signal". The computer cannot be restarted without unplugging it from the wall. Looking at the crash report, I see event 10016, related to permissions I think, but I'm a moron. Any and all solutions are greatly appreciated. Relevant components are as follows:
Graphics Card: RTX 2080
Power supply: EVGA supernova 1000g2
Storage: Sandisk 500Gb
CPU: Ryzen 2700X
Monitor: Both HP EliteDisplay E222 and another HP monitor
Since you did not include the crash report with your question, I can only suspect your problem is rooted in one of these:
Bug in the display driver and/or DirectX installation.
Proposed solution: obtain the latest driver for your RTX 2080, then do a 2D and 3D test run afterwards to make sure everything is proper.
Fan or cooling issue. Some games can force your hardware to work harder, especially over continuous use. Check your fans and cooling to make sure they are moving and cooling as fast as they should, and install temperature-monitoring software if you need to be extra sure (see the sketch below).
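If you want numbers rather than guesswork, a tiny logger using NVIDIA's NVML bindings (pip install nvidia-ml-py) can record temperature and power draw while you game; the 5-second interval here is an arbitrary choice:

    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
    try:
        while True:
            temp = pynvml.nvmlDeviceGetTemperature(
                handle, pynvml.NVML_TEMPERATURE_GPU)
            watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0
            print("%s  %d C  %.0f W" % (time.strftime("%H:%M:%S"), temp, watts))
            time.sleep(5)  # arbitrary polling interval
    finally:
        pynvml.nvmlShutdown()

If the temperature climbs steadily right before each crash, cooling is your likely culprit.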
Hope those help m8

Computer restarts with large mini batches in TensorFlow

I am running TensorFlow for Windows with a Titan X GPU (12 GB memory). When I try to train a network on 256x256x1 images with mini-batches larger than 50 images, my computer just crashes and restarts automatically. With smaller mini-batches it runs just fine.
Any clues on what might be causing this?
I've seen similar problems discussed in some gaming forums, where the PC would just shut down when the GPU was under heavy load. The reason was usually that the GPU was drawing more power than the power supply unit could handle. Check e.g. here or here. So maybe it's worth investigating whether your PSU is the culprit.
Edit: Maybe the program SpeedFan can help you debug this - it can show both voltages and temperature-sensor readings, which would also tell you if your PC is overheating (I've never used the tool myself, and I'm not affiliated with it either; I just found it online).
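If the PSU does turn out to be the limit, one software-side workaround is gradient accumulation: run small micro-batches and apply the summed gradients once, so the GPU never sees the full 50-image batch at a time. A rough sketch with the modern tf.GradientTape API (not the TF-for-Windows version you're running, and the model here is a made-up stand-in, not your network):

    import numpy as np
    import tensorflow as tf

    # Hypothetical stand-in model for 256x256x1 images.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(8, 3, activation="relu",
                               input_shape=(256, 256, 1)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10),
    ])
    optimizer = tf.keras.optimizers.SGD(0.01)
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

    def accumulated_step(images, labels, micro_batch=10):
        # One logical step over a large batch, run in small GPU-friendly chunks.
        accum = [tf.zeros_like(v) for v in model.trainable_variables]
        n = int(images.shape[0])
        steps = 0
        for start in range(0, n, micro_batch):
            xb = images[start:start + micro_batch]
            yb = labels[start:start + micro_batch]
            with tf.GradientTape() as tape:
                loss = loss_fn(yb, model(xb, training=True))
            grads = tape.gradient(loss, model.trainable_variables)
            accum = [a + g for a, g in zip(accum, grads)]
            steps += 1
        # Average the accumulated gradients and apply them once.
        optimizer.apply_gradients(
            [(a / steps, v) for a, v in zip(accum, model.trainable_variables)])

    # Dummy 50-image batch, processed 10 images at a time.
    accumulated_step(np.random.rand(50, 256, 256, 1).astype("float32"),
                     np.random.randint(0, 10, size=50))

This keeps the peak GPU load (and power draw) closer to the small-batch case while still giving you the gradient of the full mini-batch.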

Basic virtualization questions

Excuse me for my lack of knowledge but I am really new to the Virtual world and have a few questions.
I work for a small charity that specialises in providing basic IT training. We have recently acquired a few Dell PowerEdge 2650 servers and Dell desktops, and we wish to offer XP, Windows 7, Mac and Ubuntu training. I am looking at setting up a virtual environment so that we can have a standard image for each OS (I currently use image files, but it takes approximately 25 minutes to build each machine, and multi-boot is not an option as the new machines have 20 GB disks).
The servers are all dual-processor, and we can purchase more memory (I need to justify the cost).
What are the memory requirements for the host?
How many VMs can I run per server?
Can I run multiple instances of the same VM?
Thanks in advance for your knowledge.
Darryn
You might be able to get away with multi-booting those 20 GB disks; each OS will probably take no more than ten gigabytes for a minimal install, and two OSes per machine isn't terrible. (Incidentally, look around for a group like FreeGeek in your area -- larger hard drives ought to be cheap at small sizes like 120-500 GB.)
That said, virtualization might be just what you need, if you have a handful of pretty powerful machines.
I think between one and two gigabytes of host memory for every guest VM you want to run would be about right. In my experience, an Ubuntu image I gave 1024 megabytes ran very quickly, but I didn't push it very far; running Firefox or OpenOffice inside the VM would probably demand more memory quite quickly. Chrome seemed snappy.
So, if you've got 12 gigabytes of RAM, you might be able to host between four and twenty virtual machines per server simultaneously, depending upon what your guests are doing.
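To make that back-of-the-envelope maths explicit, here is roughly how I'd budget it (the 2 GB host reserve is just a guess, not a measured figure):

    def max_guests(host_ram_gb, per_guest_gb, host_reserve_gb=2):
        # Leave some RAM for the host/hypervisor itself; 2 GB is a guess.
        return max(0, int((host_ram_gb - host_reserve_gb) // per_guest_gb))

    print(max_guests(12, 2))  # -> 5 comfortable guests at 2 GB each
    print(max_guests(12, 1))  # -> 10 lighter guests at 1 GB each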
As for disk space, QEMU's -snapshot option ought to save you some. Each user can boot the same underlying disk image, while their own modifications go into the 'snapshot' file. (I have no experience doing long-term system maintenance with this option, so it could be that all twenty of your users end up storing service pack 2 contents when they upgrade in the future; I'd be scared of modifying the shared disk image once you've got snapshots of it running. Perhaps having everyone store 'personal documents' and the like on CIFS shares would make a ton of sense.)
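If you want the same copy-on-write behaviour but persistent per user, qcow2 backing files do it explicitly; -snapshot is the throwaway version of the same mechanism. A sketch of a provisioning script (the image names are made up; it assumes qemu-img is on the PATH):

    import subprocess

    BASE = "ubuntu-base.qcow2"  # made-up name for the shared master image

    def make_overlay(user):
        # Writes land in the per-user overlay; the base image stays read-only.
        overlay = "%s.qcow2" % user
        subprocess.run(
            ["qemu-img", "create", "-f", "qcow2",
             "-b", BASE, "-F", "qcow2", overlay],
            check=True)
        return overlay

    for student in ["alice", "bob"]:
        make_overlay(student)

Each student then boots their own overlay, and deleting the overlay resets the machine to the master image, which is handy in a training room.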
The biggest hurdle will probably be the Mac: because Apple's terms of service forbid running OS X on non-Apple hardware, you'll have to have some Apple machines around to host the OS X VMs (e.g. in VirtualBox).