What is the major difference between the MAX3232 and MAX3245 with respect to UART? - uart

I am not able to figure out the exact difference.
Please help me understand it better.

The MAX3232 and MAX3245 are both used for RS-232 interfacing. The difference is the AutoShutdown Plus feature, which the MAX3245 has.
To quote from the datasheet:
"These devices automatically enter a low-power shutdown mode when the RS-232 cable is disconnected or the transmitters of the connected peripherals are inactive, and the UART driving the transmitter inputs is inactive for more than 30 seconds. They turn on again when they sense a valid transition at any transmitter or receiver input. AutoShutdown Plus saves power without changes to the existing BIOS or operating system."
(http://datasheets.maximintegrated.com/en/ds/MAX3224-MAX3245.pdf)
The MAX3232 does not have this feature.


USB 2.0 "This device cannot start. (Code 10)"

This is probably a long-shot question, but I'll try it anyway.
I'm developing hardware using PIC microcontrollers (Microchip). Communication is done through a full-speed (FS) USB 2.0 link.
I connect the microcontrollers to a Windows 10 Home edition, version 21H1, build 19043.1826. The processor is an AMD Ryzen 5 3600 6-Core Processor.
First I used the PIC18F45K50, for which everything worked fine from day one. But due to the shortages on the market, I am now experimenting with the PIC18F47J53. Both microcontrollers are working fine, as I can (for example) control a MAX7219-driven display (3 x 7-segment) and also a bunch of LEDs using an STP08CP05TTR. Clock timings also seem OK - I measured them with an oscilloscope.
These 2 microcontrollers are pretty much the same, at least for the core functionality such as USB. The differences that are relevant for the issue I'm reporting here are:
PIC18F45K50 uses an internal clock of 8 MHz, and has on-board correction logic to keep the clock synced for FS USB - this is a 5V processor
PIC18F47J53 uses a 16 MHz XTAL, all should be within the USB 2.0 specs - this is a 3.3V processor
I'm using the MPLab X IDE v5.45 with the MCC (MPLab Code Configurator) in which I setup the System Module (to set the correct clock frequencies including the 48MHz for USB) and where I configure the USB.
In both microcontrollers, the setup of the USB is exactly the same. I even checked the 4 files that are automatically generated by MCC, and except for the descriptors (I used different names), all is exactly the same.
When I connect the USB to my PC (same port), the PIC18F45K50 works perfectly. But the PIC18F47J53 gives error code 10.
This does not happen every time. For example, out of 10 tries (connecting/disconnecting the cable), I got the error 7 times; 1 time the device didn't even appear, and the other 2 times I read "The device is working properly.". Although, in the latter case, the software that communicates with my controller isn't working, so there is still something wrong.
Based on the above, the first thing I would think of is a hardware issue. The strange thing, though, is that values like the vendor ID (0x4D8), product ID (0xA), BCD device release (0x100), serial number (12345678), etc. always seem to be read out correctly. If there were a hardware problem, shouldn't I see more random issues with these as well? Or is this data read out in a slower mode than Full Speed (because that could of course explain it)?
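For reference, those values live in the standard USB device descriptor, which the host reads over EP0 control transfers at the negotiated bus speed, so there is no separate slower mode for them. Below is a minimal C sketch of such a descriptor using the values from the question; the struct layout follows the USB 2.0 spec and is not the actual MCC-generated file.

#include <stdint.h>

/* Standard USB device descriptor (USB 2.0 spec, table 9-8). The VID/PID/
   bcdDevice values are the ones reported in the question; everything else
   is an illustrative guess, not the real MCC output. */
typedef struct __attribute__((packed)) {
    uint8_t  bLength;            /* 18 bytes */
    uint8_t  bDescriptorType;    /* 1 = device */
    uint16_t bcdUSB;             /* 0x0200 = USB 2.0 */
    uint8_t  bDeviceClass;
    uint8_t  bDeviceSubClass;
    uint8_t  bDeviceProtocol;
    uint8_t  bMaxPacketSize0;    /* EP0 max packet size: 8/16/32/64 */
    uint16_t idVendor;
    uint16_t idProduct;
    uint16_t bcdDevice;
    uint8_t  iManufacturer;      /* indices of string descriptors */
    uint8_t  iProduct;
    uint8_t  iSerialNumber;
    uint8_t  bNumConfigurations;
} usb_device_descriptor_t;

static const usb_device_descriptor_t dev_desc = {
    .bLength = 18, .bDescriptorType = 1, .bcdUSB = 0x0200,
    .bMaxPacketSize0 = 64,
    .idVendor  = 0x04D8,         /* Microchip VID, as in the question */
    .idProduct = 0x000A,
    .bcdDevice = 0x0100,         /* BCD device release 1.00 */
    .iManufacturer = 1, .iProduct = 2, .iSerialNumber = 3,
    .bNumConfigurations = 1,
};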
Below are screenshots taken via "Device Manager / Ports (COM & LPT) / my serial device", then selecting the property in the Details tab.
If I compare the properties from the working microcontroller (PIC18F45K50) with the not working one (PIC18F47J53), it looks like all are exactly the same.
I also compared the D- (CH1) and D+ (CH2) signals of the 2 microcontrollers with my oscilloscope. My USB knowledge is not detailed enough to interpret the signals, but what I can tell is that both look exactly the same to me, both timing-wise and voltage-level-wise. Be aware that the CH2 signal on the PIC18F47J53 (D+), the second screenshot, is clipping in the picture below, but I measured it later and it shows the same voltage level as for the PIC18F45K50.
Does anybody have a clue where I should look first? The good news is that I have a working and a non-working version, so I can start debugging step by step and compare. But some hints as to where to start would be appreciated.
EDIT 24JUL2022
I did the measurement with my oscilloscope again. This time I soldered 2 wires to the USB port to be able to easily attach my probes. Both the D- and D+ signals now have a Vpp of about 3.3V. I put some cursors on the trace, which show a pulse width of about 84 ns; that correlates with the USB full-speed bit rate of 12 MHz (it should be 83.33 ns).
I found the issue. The Vusb pin on my PIC18F47J53 was badly connected (or maybe not connected at all). I gave it another touch of my soldering iron, and bingo! The "error 10" has disappeared completely; each time I connect/disconnect, it now reports "This device is working properly.". I now also see a continuous signal on my oscilloscope - not one that disappears after a while. And I could already send/receive some commands.

ST-LINK could not connect to the target

I'm trying to connect to an stm32f401rbt6 with the ST-LINK utility.
The MCU has 6 pins connected, as on the image below.
The target is powered by a lab power supply, and the target GND is connected to the ST-Link GND.
When I plug it into the computer, the ST-LINK utility says it can't connect.
Tried:
Update ST-Link firmware
Connect under reset is the default; tried all available methods
Checked connectivity for the pins on the image
Connected with the same ST-Link to another MCU
Desoldered the MCU and soldered another one
The issue still remains. Please suggest what I'm doing wrong, or how to check whether my MCU is alive.
I once had similar issues and figured out that decoupling capacitors were vital. After soldering them onto the PCB, it worked like a charm.
(Similar question: Stm32CubeProgrammer not connecting (no error msg) using ST-LINK V2 dongle and Lora E5 mini board)
You can try the following suggestions. Some ST devices are a lot more sensitive than others when it comes to programming. I have had some ST devices program without issues, and then, using practically the same setup on other devices, it just won't work.
Place a 22 ohm resistor in series on the SWDIO and SWCLK lines. This link suggests only placing it on the SWDIO line, but I found that I needed it on the SWCLK line as well. Typical SWD Circuit
For the ST-Link settings, try using these:
Reduce the frequency from 4MHz to a lower frequency
Use SWD
Use connect under reset
Don't use an external pull-up on the NRST line.
Make sure that your programming wires between the ST-LINK and the target board are as short as is conveniently possible.
(This one I must stress as being important) Make sure that your processor's ground pins are all connected very closely together (i.e. the tracks between them are as short as possible), and, very importantly, that your programmer ground is also connected to those same ground pins very closely. At high programming speeds, a thin or long unbalanced (different-length) ground track to the processor can cause a problem with some devices.
Whatever you are using to supply power to the processor must have a voltage similar to the ST-LINK's (mine is 3V), although I have found that if the processor supply is 3.3V, programming still seems to work most of the time. (Remember the original ST-Link does not supply power; it only reads the power level.)
A dodgy programming setup can accidentally set the protection to LEVEL 2, bricking your device - so if you have been trying and not getting any further, it might be time to replace your IC.
Prior to changing / erasing a device that had been programmed to LEVEL 1, you might need to first enable the PCROP_RDP option byte. Once enabled, you should be able to change from LEVEL 1 to LEVEL 0, which will automatically erase the device (see the sketch after this list).
Some people have suggested holding the device in reset until just after pressing the erase button to enable erasing it.
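For the LEVEL 1 to LEVEL 0 change mentioned in the list, here is a minimal sketch using the STM32 HAL option-byte API. It assumes an STM32F4 part such as the stm32f401 in the question, and that the debugger can still attach to run code; the names come from the stm32f4xx HAL, so verify them against your part's headers.

#include "stm32f4xx_hal.h"

/* Sketch: drop readout protection from LEVEL 1 back to LEVEL 0.
   WARNING: leaving LEVEL 1 triggers a mass erase of the flash,
   as the answer above notes. */
void rdp_back_to_level0(void)
{
    FLASH_OBProgramInitTypeDef ob = {0};

    HAL_FLASH_Unlock();           /* unlock the flash control register */
    HAL_FLASH_OB_Unlock();        /* unlock the option bytes */

    ob.OptionType = OPTIONBYTE_RDP;
    ob.RDPLevel   = OB_RDP_LEVEL_0;
    HAL_FLASHEx_OBProgram(&ob);   /* stage the new RDP level */
    HAL_FLASH_OB_Launch();        /* apply it: reloads option bytes, mass-erases */

    HAL_FLASH_OB_Lock();
    HAL_FLASH_Lock();
}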
I hope these suggestions help...

Control stepper motors via USB

I'm building a USB device to control stepper motors. I've done this before using a parallel port, but because those ports no longer exist on current motherboards, I decided to implement USB communication between my device and the PC (host).
To achieve my objective, I equipped the device with a Freescale microcontroller that has a 12 Mbps USB module.
My USB device must receive 4 bytes (one for each motor driver) at a given time, because every byte is a step that should move the motor.
On the PC (host), a user application processes a text file with information and generates the trajectory coordinates, sending bytes at a certain rate to each motor (the timing is critical for achieving the acceleration and speed of the motors).
Using the parallel port this was an easy task, because each byte was sent sequentially at a time determined by the user app.
Doing a little research on the full-speed USB protocol, I understood that a frame is sent every 1 ms.
So I can send 4 bytes, or many more, every 1 ms, but I cannot manage the timing like I did with the parallel port.
My microcontroller can send up to 64 bytes per frame (based on the transfer types: Control, Bulk, Interrupt, Isochronous).
question 1:
How can I send 4-byte packets more often than every 1 ms?
question 2:
What type of transfer would you advise for this type of device?
Thanks.
Like Ricardo said, USB-serial will suffice.
As for the type of transfer, try implementing a CDC stack and use your SCI receiver to listen for PC commands. That will give you a receive buffer that will meet your needs.
Initialize your SCI (baud, etc)
Enable receiver and interrupt
On data receive, move it to your 4-byte command buffer
Clear receive buffer, wait for more
When you have all 4 bytes, fire off the steppers! Four bytes should take only microseconds.
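A minimal sketch of those steps in C; SCI1_DATA is a hypothetical register name, not from any specific Freescale header, so substitute the definitions from your part:

#include <stdint.h>

#define CMD_LEN 4                   /* one byte per motor driver */

/* Placeholder for the SCI data register; use the real definition
   from your device header instead. */
extern volatile uint8_t SCI1_DATA;

static volatile uint8_t cmd[CMD_LEN];
static volatile uint8_t idx;
static volatile uint8_t cmd_ready;  /* flag polled by the main loop */

/* SCI receive interrupt: runs once per received byte. */
void sci_rx_isr(void)
{
    cmd[idx++] = SCI1_DATA;         /* on many parts the read also clears the RX flag */
    if (idx == CMD_LEN) {
        idx = 0;
        cmd_ready = 1;              /* all 4 bytes in: fire off the steppers */
    }
}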
Check with Freescale to see if your processor is supported.
http://cache.freescale.com/files/microcontrollers/doc/support_info/USB_STACK_RELEASE_NOTES_V4.1.1.pdf?fpsp=1
There might even be some sample code to get you started.
-Cheers
I am achieving the same goal (driving/controlling CNC machines) like this:
The USB device is just a synchronous parallel I/O port, using continuous bulk transfers with one pipe as input and one as output. This way I was able to achieve synchronous 64-bit parallel communication with a ~70 kHz sample rate. It uses traffic of around 4.27 (in) + 4.27 (out) Mbit/s, which is the limit for my MCU and code. Bigger speeds cause jitter on the output due to USB event interrupts.
How to do it (on MCU side)
I have 2 FIFOs, one for ingoing and one for outgoing data. I have a timer interrupt occurring at the sample-rate frequency. In it, I read the inputs and feed them into the first FIFO, and I read data from the other FIFO and send it to the outputs.
On top of that, the USB task is called (inside the same interrupt), checking the FIFOs for data to send to and receive from USB, and handling the transfer itself.
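A rough sketch of that interrupt structure, byte-wide for brevity (the real setup used 64-bit samples); the FIFO helpers and the read_inputs/write_outputs/usb_task hooks are illustrative placeholders, not the author's actual code:

#include <stdint.h>

#define FIFO_SIZE 1024u                 /* power of two -> cheap wrap-around */

typedef struct {
    uint8_t buf[FIFO_SIZE];
    volatile uint16_t head, tail;       /* head = producer, tail = consumer */
} fifo_t;

static fifo_t in_fifo;                  /* sampled inputs -> USB IN pipe */
static fifo_t out_fifo;                 /* USB OUT pipe -> port outputs */

static int fifo_put(fifo_t *f, uint8_t b)
{
    uint16_t next = (f->head + 1u) & (FIFO_SIZE - 1u);
    if (next == f->tail) return 0;      /* full: drop (or count an overrun) */
    f->buf[f->head] = b;
    f->head = next;
    return 1;
}

static int fifo_get(fifo_t *f, uint8_t *b)
{
    if (f->tail == f->head) return 0;   /* empty */
    *b = f->buf[f->tail];
    f->tail = (f->tail + 1u) & (FIFO_SIZE - 1u);
    return 1;
}

/* Placeholders for the hardware and USB layers. */
extern uint8_t read_inputs(void);
extern void    write_outputs(uint8_t v);
extern void    usb_task(void);          /* services the bulk IN/OUT endpoints */

/* Timer interrupt, fired once per sample period. */
void timer_isr(void)
{
    uint8_t v;
    if (fifo_get(&out_fifo, &v))        /* next output sample, if any */
        write_outputs(v);
    (void)fifo_put(&in_fifo, read_inputs());
    usb_task();                         /* USB handled inside the same interrupt */
}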
I chose ATMEL AT32UC3A chips for this task. After long and painful research I decided on these MCUs because they have enough memory for both FIFOs and the program, so there is no need for an additional IC. They come in a QFP package, which I can use (BGA is not an option). They have HS USB (most USB MCUs have only FS, like yours). They run at 66 MHz. They support many interesting features (I did interesting projects with them in the past) and of course I have experience with ATMEL MCUs.
So if you want to achieve something similar, then:
start with bulk transfer (PC -> USB -> MCU -> output)
add FIFO if needed
I do not know the sample rate you need. The old LPT ports could handle 80-196 kHz, depending on the manufacturer. The modern ones are much, much slower (which is silly and sad).
measure the critical sample rate
you need an oscilloscope or very good hearing for this. The output data must be synchronous: no holes in it, no jitter, etc...
if any of these are present, you have to lower the sample rate. My setup could handle even a 1 MHz sample rate, but USB jitter was present (sometimes a USB event froze the sending for longer than one sample...), so I achieved only 70 kHz of stable output.
if you also need inputs, then add them
but only if the output is working as it should. Do not forget to lower the sample rate after this too... Use separate bulk pipes and FIFOs for input and output.

How CPU finds ISR and distinguishes between devices

I should first share all that I know - and that is complete chaos. There are several different questions on the topic, so please don't get irritated :).
1) To find an ISR, the CPU is provided with an interrupt number. On x86 machines (286/386 and above) there is an IVT with ISRs in it; each entry is 4 bytes in size. So we need to multiply the interrupt number by 4 to find the ISR. So the first bunch of questions is: I am completely confused about the mechanism by which the CPU receives the interrupt. To raise an interrupt, the device first asserts an IRQ - then what? Does the interrupt number travel "on the IRQ line" towards the CPU? I also read something about the device putting the ISR address on the data bus; what's that then? What is the concept of devices overriding the ISR? Can somebody tell me a few example devices where the CPU polls for interrupts? And where does it find the ISRs for them?
2) If two devices share an IRQ (which is very much possible), how does the CPU distinguish between them? What if both devices raise an interrupt of the same priority simultaneously? I got to know there will be masking of same-type and lower-priority interrupts - but how does this communication happen between the CPU and the device controller? I studied the role of the PIC and APIC for this problem, but could not understand it.
Thanks for reading.
CPUs don't poll for interrupts, at least not in a software sense. With respect to software, interrupts are asynchronous events.
What happens is that hardware within the CPU recognizes the interrupt request, which is an electrical input on an interrupt line, and in response, sets aside the normal execution of events to respond to the interrupt. In most modern CPUs, what happens next is determined by a hardware handshake particular to the type of CPU, but most of them receive a number of some kind from the interrupting device. That number can be 8 bits or 32 or whatever, depending on the design of the CPU. The CPU then uses this interrupt number to index into the interrupt vector table, to find an address to begin execution of the interrupt service routine. Once that address is determined, (and the current execution context is safely saved to the stack) the CPU begins executing the ISR.
When two devices share an interrupt request line, they can cause different ISRs to run by returning a different interrupt number during that handshaking process. If you have enough vector numbers available, each interrupting device can use its own interrupt vector.
But two devices can even share an interrupt request line and an interrupt vector, provided that the shared ISR is clever enough to go back to all the possible sources of the given interrupt and check status registers to see which device requested service, as sketched below.
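As a sketch of that last case, a shared-vector handler in C might look like this; the structure and names are illustrative, not taken from any particular OS:

#include <stdint.h>
#include <stddef.h>

/* One entry per device registered on this shared interrupt vector. */
struct shared_dev {
    volatile uint32_t *status;               /* the device's status register */
    uint32_t pending_mask;                   /* its 'interrupt pending' bit */
    void (*service)(struct shared_dev *d);   /* device handler; also clears the request */
    struct shared_dev *next;
};

/* Shared ISR: poll every device on the vector and service the
   ones that actually asserted the request. */
void shared_vector_isr(struct shared_dev *list)
{
    for (struct shared_dev *d = list; d != NULL; d = d->next) {
        if (*d->status & d->pending_mask)
            d->service(d);
    }
}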
A little more detail
Suppose you have a system composed of a CPU, an interrupt controller, and an interrupting device. In the old days, these would have been separate physical devices, but now all three might even reside in the same chip - although all the signals are still there inside the ceramic case. I'm going to use a powerPC (PPC) CPU with an integrated interrupt controller, connected to a device on a PCI bus, as an example that should serve nicely.
Let's say the device is a serial port that's transmitting some data. A typical serial port driver will load a bunch of data into the device's FIFO, and the CPU can do regular work while the device does its thing. Typically these devices can be configured to generate an interrupt request when the device is running low on data to transmit, so that the device driver can come back and feed more into it.
The hardware logic in the device will expect a PCI bus interrupt acknowledge, at which point, a couple of things can happen. Some devices use 'autovectoring', which means that they rely on the interrupt controller to see to it that the correct service routine gets selected. Others will have a register, which the device driver will pre-program, that contains an interrupt vector that the device will place on the data bus in response to the interrupt acknowledge, for the interrupt controller to pick up.
A PCI bus has only four interrupt request lines, so our serial device will have to assert one of those. (It doesn't matter which at the moment, it's usually somewhat slot dependent..) Next in line is the interrupt controller (e.g. PIC/APIC), that will decide whether to acknowledge the interrupt based on mask bits that have been set in its own registers. Assuming it acknowledges the interrupt, it either then obtains the vector from the interrupting device (via the data bus lines), or may if so programmed use a 'canned' value provided by the APIC's own device driver. So far, the CPU has been blissfully unaware of all these goings-on, but that's about to change.
Now it's time for the interrupt controller to get the attention of the CPU core. The CPU will have its own interrupt mask bit(s) that may cause it to just ignore the request from the PIC. Assuming that the CPU is ready to take interrupts, it's now time for the real action to start. The current instruction usually has to be retired before the ISR can begin, so with pipelined processors this is a little complicated, but suffice it to say that at some point in the instruction stream, the processor context is saved off to the stack and the hardware-determined ISR takes over.
Some CPU cores have multiple request lines, and can start the process of narrowing down which ISR runs via hardware logic that jumps the CPU instruction pointer to one of a handful of top level handlers. The old 68K, and possibly others did it that way. The powerPC (and I believe, the x86) have a single interrupt request input. The x86 itself behaves a bit like a PIC, and can obtain a vector from the external PIC(s), but the powerPC just jumps to a fixed address, 0x00000500.
In the PPC, the code at 0x0500 is probably just going to immediately jump out to somewhere in memory where there's room enough for some serious decision making code, but it's still the interrupt service routine. That routine will first go to the PIC and obtain the vector, and also ask the PIC to stop asserting the interrupt request into the CPU core. Once the vector is known, the top level ISR can case out to a more specific handler that will service all the devices known to be using that vector. The vector specific handler then walks down the list of devices assigned to that vector, checking interrupt status bits in those devices, to see which ones need service.
When a device, like the hypothetical serial port, is found wanting service, the ISR for that device takes appropriate actions, for example, loading the next FIFO's worth of data out of an operating system buffer into the port's transmit FIFO. Some devices will automatically drop their interrupt request in response to being accessed, for example, writing a byte into the transmit FIFO might cause the serial port device to de-assert the request line. Other devices will require a special control register bit to be toggled, set, cleared, what-have-you, in order to drop the request. There are zillions of different I/O devices and no two of them ever seem to do it the same way, so it's hard to generalize, but that's usually the way of it.
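A tiny illustration of both acknowledgement styles; the register names are invented for the example:

#include <stdint.h>

extern volatile uint8_t UART_TX_FIFO;   /* invented: a write pushes one byte into the TX FIFO */
extern volatile uint8_t UART_CTRL;      /* invented: control/acknowledge register */
#define UART_IRQ_ACK 0x01u              /* invented: write-1-to-clear acknowledge bit */

void serial_tx_service(const uint8_t *buf, uint16_t n)
{
    for (uint16_t i = 0; i < n; i++)
        UART_TX_FIFO = buf[i];          /* style 1: the data access itself may drop the request */
    UART_CTRL = UART_IRQ_ACK;           /* style 2: explicit acknowledge in a control register */
}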
Now, obviously there's more to say - what about interrupt priorities? what happens in a multi-core processor? What about nested interrupt controllers? But I've burned enough space on the server. Hope any of this helps.
I came across this question like 3 years later.. Hope I can help ;)
The Intel 8259A, or simply the "PIC", has 8 pins, IRQ0-IRQ7; every pin connects to a single device..
Let's suppose that you pressed a button on the keyboard.. the voltage on the IRQ1 pin, which is connected to the KBD, goes high.. so after the CPU gets interrupted, acknowledges the interrupt, bla bla bla... the PIC simply adds 8 (its programmed base offset) to the number of the IRQ line, so IRQ1 means 1+8, which means 9
So the CPU sets its CS and IP from the 9th entry in the vector table.. and because the IVT is an array of 4-byte entries, it just multiplies the vector number by 4 ;)
CPU.CS=IVT[9].CS
CPU.IP=IVT[9].IP
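To make the arithmetic concrete, here is the same lookup rendered as a C sketch (16-bit real-mode x86 only; the address-zero access is for illustration and is not valid in hosted C):

#include <stdint.h>

/* Real-mode IVT: 256 entries at physical address 0, each 4 bytes -
   offset (IP) first, then segment (CS) - hence "vector * 4". */
typedef struct {
    uint16_t ip;
    uint16_t cs;
} ivt_entry_t;

uint32_t isr_physical_address(uint8_t vector)
{
    const ivt_entry_t *ivt = (const ivt_entry_t *)0u;  /* IVT base at 0000:0000 */
    ivt_entry_t e = ivt[vector];                       /* entry lives at vector * 4 */
    return ((uint32_t)e.cs << 4) + e.ip;               /* segment:offset -> 20-bit physical */
}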
the ISR deals with the device through the I/O ports ;)
Sorry for my bad English.. I am an Arab though :)

Is there software or code to alter USB power output

I had a look at this and this, but no one sounded particularly sure of their ideas, and I'm kind of after a different thing anyway. I want to hook my USB power cables (red and black) up to my phone so I don't have to use a battery (the battery is dead anyway and this is just an experiment). The problem is that the USB standard ensures that a minimum of 4.35V is supplied, when I only want 3.7V. Does anyone know for sure whether you can regulate the power output programmatically? Some other queries I have are: What kind of power does sleep mode provide? And what would I need to code something in to play with this, C++?
No, you won't find a computer that allows you to set this voltage in software. It would break the USB specification.
You can get 100mA by default, and 500mA if your USB device negotiates it with the computer (requiring a little bit of logic in the device). Multiply by 5V to get the provided power.
A bit more info on the answer from Pascal:
The normal operation (Non-Configured mode) is 100mA
In theory, the operating system should check the MaxPower value of the device's configuration descriptor to decide whether to allow it to draw more than 100mA (see the descriptor sketch below).
In practice, PCs do not do this (and have no way to control it). So you can try taking 500mA.
(Of course connecting a bus-powered hub and attaching more than one 500mA device should not work.)
If the device is not actively used, the OS may (and should) suspend it. When suspended, the current is limited to 0.5-1mA (again, in theory, since it cannot be controlled by software).
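For reference, the MaxPower value mentioned above is the bMaxPower field of the standard USB configuration descriptor. A minimal sketch with illustrative values; the layout is from the USB 2.0 spec:

#include <stdint.h>

/* Standard configuration descriptor (USB 2.0 spec, table 9-10).
   bMaxPower is in 2 mA units, so 250 advertises a 500 mA draw. */
typedef struct __attribute__((packed)) {
    uint8_t  bLength;
    uint8_t  bDescriptorType;
    uint16_t wTotalLength;
    uint8_t  bNumInterfaces;
    uint8_t  bConfigurationValue;
    uint8_t  iConfiguration;
    uint8_t  bmAttributes;      /* bit 6 = self-powered, bit 5 = remote wakeup */
    uint8_t  bMaxPower;         /* maximum current draw, in 2 mA units */
} usb_config_descriptor_t;

static const usb_config_descriptor_t cfg = {
    .bLength = 9, .bDescriptorType = 2,
    .wTotalLength = 9,          /* illustrative: would normally include interface/endpoint descriptors */
    .bNumInterfaces = 1, .bConfigurationValue = 1, .iConfiguration = 0,
    .bmAttributes = 0x80,       /* bus-powered */
    .bMaxPower = 250,           /* request 500 mA from the host */
};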