I have a computer and two monitors (with different sizes and resolutions), and I want to connect them to the PC via a DVI splitter.
My question is: how is the EDID determined in this setup?
Does the splitter have a main output (whose monitor's EDID is sent to the PC) and a secondary output (whose monitor has to adapt)? Or is the EDID taken from the monitor with the lower capabilities?
I am a bit confused about how I/O devices like keyboards store their input for use by the operating system or an application. If I have a computer with a single processor (a CPU with a single core), and the currently executing process is a game, how is the game able to be "aware" of keyboard input? Even if a key press were to force a hardware interrupt (and thus a context switch), and the OS were to "feed" the key value to the game whenever it gives control back to the game process, there's no guarantee that the game loop would even be checking for player input at that time; it could just as well be updating game object positions or rendering the game world.
So my question is this...
Do I/O devices like keyboards store their input in some kind of on-board, hardware-specific microprocessor or on-board memory buffer queue, which can later be read and flushed by external processes or the OS itself? Any insight is greatly appreciated, thanks!
Do I/O devices like keyboards store their input in some kind of on-board, hardware-specific microprocessor or on-board memory buffer queue, which can later be read and flushed by external processes or the OS itself?
Let's split it into three parts.
The Device Specific Part
For old keyboards (before USB): a microcontroller in the keyboard regularly scans a grid of switches and detects when a key is pressed or released, converts that into a code (which could be multiple bytes), then sends the code one byte at a time to the computer. The computer also has a little microcontroller to receive these bytes. That microcontroller has a 1-byte buffer (not even big enough for multi-byte codes).
For newer keyboards (USB): the keyboard's internals are mostly the same (a microcontroller scanning a grid of switches, etc.); but the USB controller asks the keyboard "Anything happen?" regularly (typically every 8 milliseconds) and the keyboard's microcontroller replies.
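As a rough illustration of that scanning loop, here is a firmware-side sketch (all names here are invented; real keyboard firmware is device specific):

#include <stdint.h>

#define NUM_ROWS 8
#define NUM_COLS 16

extern uint16_t read_columns(uint8_t row);              // drive one row line, read all column bits
extern void send_code(int pressed, uint8_t row, uint8_t col);  // emit a make/break code to the host

static uint16_t prev[NUM_ROWS];   // column bits seen on the previous scan

void scan_matrix(void)            // called regularly, e.g. every few milliseconds
{
    for (uint8_t row = 0; row < NUM_ROWS; row++) {
        uint16_t now = read_columns(row);
        uint16_t changed = now ^ prev[row];             // bits that differ since last scan
        for (uint8_t col = 0; col < NUM_COLS; col++) {
            if (changed & (1u << col))
                send_code((now >> col) & 1u, row, col); // 1 = press, 0 = release
        }
        prev[row] = now;
    }
}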
In any case, the keyboard driver gets the code that came from the keyboard and processes it; typically converting it into a fixed-length "key code", merging it with other data (whether shift or capslock or ... was active at the time; whether there are Unicode codepoints that make sense for the key, etc.) and bundling all that into a data structure.
The OS Specific Part
That data structure (created by the keyboard driver) is typically standardized by the OS as a "user input event" (so, same "event" data structure for keyboard, mouse, touchscreen, joystick, ...).
That "user input event" is sent from driver via. some form of inter-process communication (pipes, messages, ...) to something else (e.g. GUI). This inter-process communication has 2 common behaviors - if the receiving program is blocked waiting to receive an event then the scheduler unblocks it (cancels the waiting) and schedules it to get CPU time again; and if the receiving program isn't waiting the event is often put on a queue (in memory) of pending events.
Of course often there are many processes involved, and the "user input event" might be forwarded from one process (e.g. an input method editor) to another process (e.g. the GUI) to another process (e.g. whichever window has keyboard focus). Also (for old legacy command line stuff) it might end up at a translation layer (e.g. a terminal emulator) that converts the events into a character stream (stdin) while destroying most of the information (e.g. when a key is released).
The Language Specific Part
To get the event from high level code, it depends on what the language is and sometimes also which library is being used. The most common approach is some kind of "getEvent()" that causes the program to fetch the next event from its queue (from memory), and may cause the program to wait (and not use any CPU time) if there isn't any event yet. However, often that is buried further, such that you register a callback; then something else calls "getEvent()" and, when it receives an event, calls the callback you registered; so it might end up like (e.g. for Java) public boolean handleEvent(Event evt) { switch (evt.id) { case Event.KEY_PRESS: ....
Keyboards are mostly USB today. On most computers, including ARM computers, you have a USB controller implementing the xHCI specification, developed by several major tech companies. If you google "xhci intel spec" or something similar, the first link or so should be the full specification.
The xHCI spec requires implementations to be in the form of a PCI device. PCI is another spec, developed by the PCI-SIG group. It specifies everything down to hardware requirements. It is not a free spec like xHCI; it is actually quite expensive to obtain (around $3k).
The OS detects PCI devices using ACPI or similar specifications, which can sometimes be board specific (especially on ARM, since all x86-based computers implement ACPI). ACPI tables, found at conventional locations in RAM, say where to find the base addresses of the memory-mapped configuration space of each PCI device.
The OS interacts with PCI devices using memory-mapped registers. The OS reads/writes at the positions specified by the ACPI tables and by the configuration spaces themselves to gather information about a device and to make it do operations on the OS's behalf. The configuration space of a PCI device has a general portion (the same for every device) and a more specific portion (device dependent). In the general portion, there are BARs (Base Address Registers) that contain the addresses of the device's memory-mapped register regions. Each implementer of the convention can do whatever they want with the device-specific portion. The general portion must be the same for every device so that the OS can properly detect and initialize the device.
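As a concrete (if simplified) illustration, here is the legacy x86 "mechanism #1" port-I/O way of reading a PCI configuration dword; modern systems typically use the memory-mapped ECAM region that ACPI's MCFG table points to instead, but the register layout read is the same (outl/inl are assumed port-I/O helpers, not a real library API):

#include <stdint.h>

extern void     outl(uint16_t port, uint32_t value);   // assumed port-I/O helpers
extern uint32_t inl(uint16_t port);

uint32_t pci_config_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t offset)
{
    uint32_t address = (1u << 31)             // enable bit
                     | ((uint32_t)bus << 16)
                     | ((uint32_t)dev << 11)
                     | ((uint32_t)fn  << 8)
                     | (offset & 0xFC);       // dword-aligned register offset
    outl(0xCF8, address);                     // CONFIG_ADDRESS
    return inl(0xCFC);                        // CONFIG_DATA
}

// Offset 0x00 of the general portion holds the vendor ID (low 16 bits) and
// device ID (high 16 bits); reading 0xFFFF means "no device here".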
Today, each type of device has to respect a certain convention to work with current computers. For example, hard disks implement SATA and the board has an AHCI (a PCI SATA controller). Similarly, keyboards implement USB and the board has an xHCI.
The xHCI itself has complex interaction mechanisms. A summary for keyboards: the OS will "activate" the interrupt IN endpoint of the keyboard. The OS will then place transfer requests (TRBs) on a circular ring in RAM. For interrupt endpoints, the xHCI will read one transfer request per time interval, specified by the OS in a specific xHCI register. When the xHCI executes a TRB (when it does the transfer), it will post an event to another circular ring in RAM called the Event Ring. When a new event is posted, the xHCI triggers an interrupt. In the interrupt handler, the OS will read the Event Ring to determine what happened. For a keyboard, the OS will probably see that a transfer was done and read the data.

Most often, keyboards will return 8 bytes per report. The bytes contain conventional scancodes describing the keys that were pressed at the moment of the transfer, so they are not directly in UTF-8 or ASCII format. There is one scancode per keyboard key. It is up to the OS to determine what to do with each key. For example, if the data says that the 'A' key is pressed, the OS can look at whether the 'SHIFT' key is pressed to determine if the 'A' should be uppercase or lowercase. If the next report says that the 'A' key is not pressed, then the OS should consider this a release of the key (the user released the key). In other words, the keyboard is polled by the OS at certain intervals.
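For illustration, here is a sketch of parsing such an 8-byte boot-protocol report (byte 0 = modifier bits, byte 1 = reserved, bytes 2-7 = usage codes of up to six currently pressed keys); key_pressed/key_released are hypothetical callbacks, not a real API:

#include <stdint.h>
#include <string.h>

#define MOD_LSHIFT 0x02
#define MOD_RSHIFT 0x20

static uint8_t prev_report[8];    // last report, to detect presses/releases

extern void key_pressed(uint8_t usage, int shifted);   // hypothetical callbacks
extern void key_released(uint8_t usage);

void handle_report(const uint8_t report[8])
{
    int shifted = (report[0] & (MOD_LSHIFT | MOD_RSHIFT)) != 0;

    // A usage code present now but not in the previous report = key press.
    for (int i = 2; i < 8; i++) {
        if (report[i] && !memchr(&prev_report[2], report[i], 6))
            key_pressed(report[i], shifted);
    }
    // A usage code present before but absent now = key release.
    for (int i = 2; i < 8; i++) {
        if (prev_report[i] && !memchr(&report[2], prev_report[i], 6))
            key_released(prev_report[i]);
    }
    memcpy(prev_report, report, 8);
}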
The interrupt handler will probably pass the key to other portions of the kernel and save it to a process-specific structure. The process will probably poll a lock-protected buffer that contains events. It could also be a system-wide buffer that simply contains all events.
The higher-level implementation probably varies between OSes. If you understand the lower-level inner workings, then you can probably imagine how an OS will work. The whole thing is certainly very complex, because CPUs nowadays have several cores and caches and other complex mechanisms. The OS must make sure to work seamlessly with all these mechanisms, so it is quite complex, but achievable. To understand the whole thing, you'd probably need to understand the whole OS; it has ramifications in lots of other OS concepts.
I'm trying to interface a board-level USB camera with an STM32-family microcontroller and send the image file to a central computer using CAN bus. I just want to know if this is possible / has been done before, and how involved a task it would be.
I worked at a company where we sent live (low-resolution infra-red) video streams over CAN, but towards the end of my time there they shifted towards ethernet.
So it is possible, but certainly not what CAN is best suited for. The main advantages of CAN are that it is a multi-point, multi-master bus with built-in arbitration. It is meant for short frames, typically up to 8 data bytes (CAN FD allows you to increase that to 64).
If your camera is USB, why not just get a USB repeater cable or USB-over-ethernet gateway?
If there is already a CAN network in place that you are piggy-backing onto then you need to consider what impact you will have on the existing traffic.
If you are starting from scratch then of course CAN will work but it would be an odd choice.
Depending on whether it's CAN or CAN FD (which affects the maximum transfer packet size), you have higher-level protocol options to packetise your images and send them over the CAN bus like any other block of data.
For regular CAN, you're after the part of the standard called J1939/21 Data Link Layer. There are public versions of this floating around online; however, due to the agreement when purchasing the standard, I am not able to share the specifics from my copy.
It's on pages 27-28 of the 2001 revision.
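Without quoting the standard, the general shape of that framing is public knowledge: each 8-byte transport data frame carries a 1-byte sequence number plus 7 payload bytes. A hedged sketch of that chunking (can_send() is an assumed driver function, not a real API):

#include <stdint.h>
#include <string.h>

extern int can_send(uint32_t can_id, const uint8_t data[8]);  // assumed CAN driver call

int send_image_block(uint32_t can_id, const uint8_t *buf, uint32_t len)
{
    uint8_t frame[8];
    uint8_t seq = 1;                       // J1939 sequence numbers start at 1

    for (uint32_t off = 0; off < len; off += 7, seq++) {
        uint32_t chunk = (len - off > 7) ? 7 : (len - off);
        frame[0] = seq;                    // sequence number in byte 0
        memset(&frame[1], 0xFF, 7);        // pad a short final frame
        memcpy(&frame[1], buf + off, chunk);
        if (can_send(can_id, frame) != 0)
            return -1;
    }
    return 0;
}

Note that with a 1-byte sequence number a single transport session maxes out at 255 x 7 = 1785 bytes, and a real J1939 session also needs the connection-management frames around it, so a camera image has to be split across many sessions (or you define your own scheme).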
This may be a stupid question, but it's been confusing me.
I've been watching some videos on embedded systems, and they talk about parallel ports: the data, the direction, and the number of bits used.
I understand that the ports are connected to wires which feed other parts of the system or external devices. But I am confused because the lecture I watched says that controlling a single LED requires only 1 bit of 1 port.
My question is, what does the parallel port on an embedded system look like and how would you connect your own devices to the board? (say you made a device which sent 4 random bits to the port)
EDIT: I have just started learning, so I might have missed a vital piece of information which would tie this all together. I just don't understand how you can have an 8-bit port and only use 1 bit of it.
Firstly, you should know that the term "parallel port" can refer to a wide variety of connectors. People usually use the phrase to describe the 25-pin printer connectors found on older PCs, but parallel ports can have more or fewer pins than that. The Wikipedia article on them has some examples.
The LED example means that if you have an 8-bit parallel port, it will have 8 data pins, so you only need to connect one of the pins to an LED to be able to control it. The other pins don't disappear or anything strange; they can just be left unconnected. The rest of the pins will be driven to ones or zeros as well, but it doesn't matter because they're not connected. Writing a "1" or "0" to that one connected pin will drive the voltage high or low, which will turn the LED on or off, depending on how it's connected. You can write whatever you want to the other pins and it won't affect the operation of the LED (though it would be safest to write "0"s to them).
Here's an example:
// assume REG is a memory-mapped register that controls an 8-bit output
// port. The port is connected to an 8-pin parallel connector. Pin 0 is
// connected to an LED that will be turned on when a "1" is written to
// Bit 0 (the least-significant bit) of REG
REG = 0x01;  // write a "1" to bit 0, "0"s to everything else
I think your confusion stems from the phrase "we only need one bit", and I think it's a justified confusion. What they mean is that we only need to control the one bit on the port that corresponds to our LED to be able to manipulate the LED; but in reality you can't write just one bit at a time, so it's a bit (ha!) misleading. You (probably) won't find registers smaller than 8 bits anymore, so you do have to read/write the registers in whole bytes at a time, but you can mask off the bits you don't care about, or do read-modify-write cycles to avoid changing bits you don't intend to.
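Continuing the hypothetical REG example from above, a read-modify-write looks like this:

// Read-modify-write: change only bit 0, leaving the other seven bits alone.
REG |=  0x01;   // OR sets bit 0 (LED on); other bits keep their values
REG &= ~0x01;   // AND with the inverted mask clears bit 0 (LED off)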
Without the context of a verbatim transcript of the videos in question, it is probably not possible to be precise about what they may have specifically referred to.
The term "parallel port" historically commonly refers to ports primarily intended for printer connections on a PC, conforming to the IEEE 1284 standard; the term distinguishing it from the "serial port" also used in some cases for printer connections but for two-way data communications in general. More generally however it can refer to any port carrying multiple simultaneous data bits on multiple conductors. In this sense that includes SDIO, SCSI, IDE, GPIB to name but a few, and even the processor's memory data bus is an example of a parallel port.
Most likely, in the context of embedded systems in general, it refers to a word-addressed GPIO port, although it is not a particularly useful or precise term. Typically on microcontrollers, GPIO (general-purpose I/O) ports are word addressable (typically 8, 16, or 32 bits wide); all bits of a single GPIO port may be written simultaneously (in parallel), with all bit edges synchronised so their states are set at the same time.
Now in the case where you only want to access a single bit of a GPIO port (to control an LED, for example), some GPIO blocks allow single-bit access by having separate set/clear registers, while others require read-modify-write semantics on the entire port. ARM Cortex-M supports "bit-banding", which is an alternate address space where every word address corresponds to a single bit in the physical address space.
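For illustration, the standard Cortex-M peripheral bit-band mapping looks like this (GPIO_ODR_ADDR is a hypothetical register address, not a real part's):

#include <stdint.h>

// Each bit of the 0x40000000 peripheral region has a word-sized alias at
// 0x42000000 + (byte_offset * 32) + (bit_number * 4).
#define BITBAND_PERIPH(addr, bit) \
    (0x42000000u + (((addr) - 0x40000000u) * 32u) + ((bit) * 4u))

#define GPIO_ODR_ADDR 0x40020014u   // hypothetical GPIO output data register

void led_on(void)
{
    // Writing 1 or 0 to the alias word sets or clears just that one bit,
    // with no read-modify-write sequence needed.
    *(volatile uint32_t *)BITBAND_PERIPH(GPIO_ODR_ADDR, 0) = 1;
}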
However, bit access of a GPIO port is not the same as a serial port; a serial port is one where multiple data bits are sent one at a time, as opposed to multiple data bits simultaneously.
Moreover, the terms parallel port and serial port imply some form of block or stream data transfer, as opposed to control I/O where each bit controls one thing, such as your example of the LED on/off: there the LED is not "receiving data", it is simply being switched on and off. This is normally referred to as digital I/O (DIO). In this context you might refer to a digital I/O port, a term that distinguishes it from analogue I/O, where the voltage on the pin can be set or measured, as opposed to just two states, high/low.
I'm building a USB device to control stepper motors. I've done this before using a parallel port; because those ports do not exist on current motherboards, I decided to implement USB communication between my device and the PC (host).
To achieve my objective, I built the device around a Freescale microcontroller that has a 12 Mbps USB module.
My USB device must receive 4 bytes (one for each motor driver) at a given time, because every byte is a step that should move the motor.
On the PC (host), a user application processes a text file with information and generates the trajectory coordinates, sending the bytes for each motor at a certain rate (the timing is critical for achieving the desired acceleration and speed of the motors).
Using the parallel port, this was an easy task because each byte was sent sequentially, at a time determined by the user application.
Doing a little research on the full-speed USB protocol, I understood that a frame is sent every 1 ms.
So I can send 4 bytes, or many more, every 1 ms, but I cannot manage the timing the way I did with the parallel port.
My microcontroller can send up to 64 bytes per frame (depending on the transfer type: control, bulk, interrupt, isochronous...).
Question 1: How can I send 4-byte packets more often than every 1 ms?
Question 2: What type of transfer would you advise for this type of device?
Thanks.
Like Ricardo said, USB-serial will suffice.
As for the type of transfer, try implementing a CDC stack and use your SCI receiver to listen for PC commands. That will give you a receive buffer which will meet your needs:
Initialize your SCI (baud, etc)
Enable receiver and interrupt
On data receive, move it to your 4-byte command buffer
Clear receive buffer, wait for more
When you have all 4 bytes, fire off the steppers! Four bytes should only take microseconds to handle.
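A rough sketch of that receive path (SCI_DATA and step_motors() are placeholders, not real Freescale register or API names):

#include <stdint.h>

extern volatile uint8_t SCI_DATA;     // placeholder for the SCI data register

static volatile uint8_t cmd[4];       // one byte per motor driver
static volatile uint8_t cmd_len = 0;

extern void step_motors(const volatile uint8_t c[4]);   // placeholder

void sci_rx_isr(void)                 // fires once per received byte
{
    cmd[cmd_len++] = SCI_DATA;        // reading the data register typically clears the RX flag
    if (cmd_len == 4) {               // full 4-byte command received
        step_motors(cmd);             // fire off the steppers
        cmd_len = 0;                  // start collecting the next command
    }
}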
Check with Freescale to see if your processor is supported.
http://cache.freescale.com/files/microcontrollers/doc/support_info/USB_STACK_RELEASE_NOTES_V4.1.1.pdf?fpsp=1
There might even be some sample code to get you started.
-Cheers
I am achieving the same goal (driving/controlling CNC machines) like this:
The USB device is just a synchronous parallel I/O port, using continuous bulk transfers with one pipe as input and one as output. This way I was able to achieve synchronous 64-bit parallel communication with a ~70 kHz sample rate. It uses traffic of around (in) 4.27 + (out) 4.27 Mbit/s, which is the limit for my MCU and code. Higher speeds cause jitter on the output due to USB event interrupts.
How to do it (on MCU side)
I have two FIFOs, one for incoming and one for outgoing data. I have a timer interrupt occurring at the sample-rate frequency. In it, I read the inputs and feed them into the first FIFO, and read data from the second FIFO and send it to the outputs.
On top of that, the USB task is called (inside the same interrupt), checking the FIFOs for data to send and for data incoming from USB, and handling the transfer itself.
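In rough C, the timer interrupt looks something like this (the fifo_* helpers and pin I/O functions are placeholders for my real implementation):

#include <stdint.h>

typedef struct fifo fifo_t;                       // opaque FIFO type (placeholder)
extern fifo_t in_fifo, out_fifo;

extern uint64_t read_input_pins(void);            // sample all input bits at once
extern void     write_output_pins(uint64_t bits); // drive all output bits at once
extern void     fifo_push(fifo_t *f, uint64_t v);
extern int      fifo_pop(fifo_t *f, uint64_t *v); // returns 0 if the FIFO is empty
extern void     usb_task(void);                   // service the bulk endpoints

// Fires at the sample-rate frequency, so the pin I/O stays strictly
// periodic while USB fills/drains the FIFOs in between samples.
void timer_isr(void)
{
    fifo_push(&in_fifo, read_input_pins());   // inputs -> IN-pipe FIFO

    uint64_t out;
    if (fifo_pop(&out_fifo, &out))            // OUT-pipe FIFO -> outputs
        write_output_pins(out);

    usb_task();                               // handle the transfers themselves
}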
I chose ATMEL AT32UC3A chips for this task. After long and painful research I decided on these MCUs because they have enough memory for both FIFOs and the program (so no need for an additional IC), they come in a QFP package which can be used (BGA is not an option), they have HS USB (most USB MCUs have only FS, like yours), they run at 66 MHz, and they support many interesting features (I did interesting projects with them in the past). And of course I have experience with ATMEL MCUs from the past.
So if you want to achieve something similar, then:
start with bulk transfer (PC -> USB -> MCU -> output)
add a FIFO if needed
I do not know the sample rate you need. The old LPT ports could handle 80-196 kHz, depending on the manufacturer. The modern ones are much, much slower (which is silly and sad).
measure the critical sample rate
You need an oscilloscope (or very good hearing) for this. The output data must be synchronous: no holes in it, no jitter, etc.
If any of these are present, you have to lower the sample rate. My setup could handle even a 1 MHz sample rate, but USB jitter was present (sometimes a USB event froze the sending for longer than one sample...), so I achieved only 70 kHz of stable output.
if you also need inputs, then add them
But only once the output is working as it should. Do not forget to lower the sample rate after this too... Use separate bulk pipes and FIFOs for input and output.
I'm using USB for communication. Our device sends 100 KB/s of data (it's an ARM7 with a very small memory), and the PC needs to receive and process it all.
My previous design was implemented as a mass-storage device, with an extended command for the communication protocol. The PC software runs a thread with a loop to receive the data.
The issue is: sometimes it loses data.
So we switched to another solution: a USB-emulated serial COM port (RS-232).
But I don't know whether or not the OS can hold that much data before I read it using MFC (or pyserial). How can I get/set the buffer size?
We regularly push about 100 KB/s through our USB CDC implementation, and the PC is fast enough to receive all the data. But it seems that the built-in limits are lower with USB-serial (CDC) than with the mass-storage protocol (in our case, ~600 KB/s for mass storage versus ~100 KB/s for CDC).
The PC receive thread should have a buffer that's "big enough".
Edit: I don't know Windows' Buffer sizes, or how to get them, though.
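For what it's worth, on Windows you can at least suggest driver buffer sizes through the Win32 serial API. A minimal sketch (COM3 is just an example port name):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("\\\\.\\COM3", GENERIC_READ | GENERIC_WRITE,
                           0, NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        printf("failed to open port\n");
        return 1;
    }
    // Recommend a 64 KB receive and 4 KB transmit buffer to the driver.
    // SetupComm() is only a hint; the driver may round or ignore it.
    if (!SetupComm(h, 65536, 4096))
        printf("SetupComm failed\n");
    CloseHandle(h);
    return 0;
}

If I remember correctly, pyserial exposes the same hint on Windows via Serial.set_buffer_size(); either way, the surest fix is to keep your own receive thread draining the port into a buffer that's "big enough", as above.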