I am running a 5.10 mainline kernel (built by Yocto Dunfell) along with U-Boot 2021.07 on an i.MX6. I believe I have the HDMI working: if I specify an EDID binary file on the kernel command line (adding drm.edid_firmware=1024x600.bin to CONFIG_CMDLINE), the framebuffer comes up at the right resolution and I can write into /dev/fb0 and see the display output change. We need to support multiple displays, though (two 7" 1024x600 HDMI displays, a 10" 1280x800 HDMI display, and a 10" 1024x600 LVDS display), so hardcoding the EDID data is not a long-term solution.
What I am seeing, though, is that the system always comes up at 1024x768, and the other resolutions aren't listed in the modes file in sysfs. I can dump the EDID data on all the displays (using i2cdump 1 0x50) and it appears to be correct (they all have the proper EDID header and so on). The displays all come up at the proper resolution when plugged into a Windows PC, so I have no reason to suspect the EDID data is garbage. I can post those dumps if needed.
I have tried adding mxcfb0:dev=hdmi to my U-Boot boot args (which do show up if I cat /proc/cmdline), but that didn't appear to do anything.
I am checking the resolution using fbset -s.
This is probably a long-shot question, but I'll try it anyway.
I'm developing hardware using PIC microcontrollers (Microchip). Communication is done through a full-speed (FS) USB 2.0 link.
I connect the microcontrollers to a PC running Windows 10 Home, version 21H1, build 19043.1826. The processor is an AMD Ryzen 5 3600 6-Core Processor.
First I used the PIC18F45K50, for which everything worked fine from day one. But due to the shortages on the market, I am now experimenting with the PIC18F47J53. Both microcontrollers are working fine, as I can (for example) control a MAX7219-driven display (3 x 7-segment) and also a bunch of LEDs using an STP08CP05TTR. Clock timings also seem OK; I measured them with an oscilloscope.
These 2 microcontrollers are pretty much the same, at least for the core functionality such as USB. The differences that are relevant for the issue I'm reporting here are:
The PIC18F45K50 uses an internal 8 MHz clock and has on-board correction logic to keep the clock synced for full-speed USB; this is a 5 V processor.
The PIC18F47J53 uses a 16 MHz crystal; all should be within the USB 2.0 specs. This is a 3.3 V processor.
I'm using MPLAB X IDE v5.45 with MCC (MPLAB Code Configurator), in which I set up the System Module (to get the correct clock frequencies, including the 48 MHz for USB) and where I configure the USB.
In both microcontrollers, the setup of the USB is exactly the same. I even checked the 4 files that are automatically generated by MCC, and except for the descriptors (I used different names), all is exactly the same.
When I connect the USB to my PC (same port), the PIC18F45K50 works perfectly, but the PIC18F47J53 gives error code 10.
This does not happen every time. For example, out of 10 tries (connecting/disconnecting the cable), I got the error 7 times; once the device didn't even appear, and the other 2 times I read "The device is working properly." Although in the latter case my software that communicates with the controller still isn't working, so there is still something wrong.
Based on the above, the first thing I would suspect is a hardware issue. The strange thing, though, is that the Vendor ID (0x4D8), Product ID (0xA), BCD Device Release (0x100), Serial Number (12345678), etc. always seem to be read out correctly. If there were a hardware problem, shouldn't I have more random issues with these as well? Or is this data read out in a slower mode than full speed (because that could of course explain it)?
Below are screenshots taken via "Device Manager / Ports (COM & LPT) / my serial device", then selecting the property in the Details tab.
If I compare the properties of the working microcontroller (PIC18F45K50) with those of the not-working one (PIC18F47J53), they all look exactly the same.
I also tried to compare the D- (CH1) and D+ (CH2) signals between the 2 microcontrollers with my oscilloscope. My USB knowledge is not detailed enough to interpret the signals, but what I can tell is that both look exactly the same to me, both timing-wise and voltage-level-wise. Be aware that the CH2 signal (D+) on the PIC18F47J53 in the second screenshot is clipped in the picture below, but I measured it later and it shows the same voltage level as on the PIC18F45K50.
Does anybody here have a clue where I should look first? The good news is that I have a working and a non-working version, so I can start debugging step by step and compare. But some hints on where to start would be appreciated.
EDIT 24JUL2022
I did the measurement with my oscilloscope again. This time I soldered 2 wires to the USB port so I could easily attach my probes. Both the D- and D+ signals now have a Vpp of about 3.3 V. I put cursors on the trace, which show a pulse width of about 84 ns; that correlates with the USB full-speed bit rate of 12 Mbit/s (the bit time should be 83.33 ns).
I found the issue. The VUSB pin on my PIC18F47J53 had a bad connection (or was possibly not connected at all). I gave it another touch of the soldering iron, and bingo! The "error 10" has disappeared completely; each time I connect/disconnect it now reports "This device is working properly." and error 10 never appears. I also see a continuous signal on my oscilloscope now, not one that disappears after a while, and I could already send/receive some commands.
I'm trying to tap the currently selected output audio device on macOS, so that I basically have a pass-through listener that can monitor the audio stream currently being output without affecting it.
I want to copy this data to a ring buffer in real time so I can operate on it separately.
The combination of Apple docs and (outdated?) SO answers is confusing as to whether I need to write a hacky kernel extension, can use CoreAudio for this, or need to interface with the HAL.
I would like to work in Swift if possible.
Many thanks
(ps. I had been looking at this and this)
I don't know about kernel extensions - their use of special "call us" signing certificates or the necessity of turning off SIP discourages casual exploration.
However, you can use a combination of CoreAudio and a HAL AudioServer plugin to do what you want, and you don't even need to write the plugin yourself: there are several open-source versions to choose from.
CoreAudio doesn't give you a way to record from (or "tap") output devices - you can only record from input devices. The way to get around this is to create a virtual "pass-through" device (an AudioServerPlugin), not associated with any hardware, that copies its output through to its input; then set this pass-through device as the default output and record from its input. I've done this using open-source AudioServer plugins like BackgroundMusic and BlackHole.
To tap/record from the resulting device you can simply add an AudioDeviceIOProc callback to it, or set the device as the kAudioOutputUnitProperty_CurrentDevice of a kAudioUnitSubType_HALOutput AudioUnit; a minimal sketch of the IOProc approach follows.
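For illustration, here's a minimal C sketch of the IOProc route (the HAL interface is a plain C API, so it can also be called from Swift). It assumes you already have the AudioObjectID of the pass-through device; stow_samples() is a hypothetical realtime-safe routine of your own, not a CoreAudio call:

#include <CoreAudio/CoreAudio.h>

// Hypothetical realtime-safe routine that copies bytes into your ring buffer.
extern void stow_samples(const void *data, UInt32 byteCount);

// Called by CoreAudio on the device's realtime thread with the captured buffers.
static OSStatus tapIOProc(AudioObjectID inDevice,
                          const AudioTimeStamp *inNow,
                          const AudioBufferList *inInputData,
                          const AudioTimeStamp *inInputTime,
                          AudioBufferList *outOutputData,
                          const AudioTimeStamp *inOutputTime,
                          void *inClientData)
{
    for (UInt32 i = 0; i < inInputData->mNumberBuffers; i++) {
        const AudioBuffer *buf = &inInputData->mBuffers[i];
        stow_samples(buf->mData, buf->mDataByteSize);   // no locks or allocation here
    }
    return noErr;
}

// Attach the callback to the pass-through device and start it running.
static AudioDeviceIOProcID startTap(AudioObjectID tapDevice)
{
    AudioDeviceIOProcID procID = NULL;
    if (AudioDeviceCreateIOProcID(tapDevice, tapIOProc, NULL, &procID) == noErr)
        AudioDeviceStart(tapDevice, procID);
    return procID;
}

Tearing it down is the mirror image: AudioDeviceStop() followed by AudioDeviceDestroyIOProcID().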
There are two problems with the above virtual pass-through device approach:
1. you can't hear your output anymore, because it's being consumed by the pass-through device
2. changing the default output device will switch away from your device and the tap will fall silent.
If 1. is a problem, then a simple fix is to create a Multi-Output Device containing the pass-through device and a real output device (see screenshot) and set this as the default output device. Volume controls stop working, but you can still change the real output device's volume in Audio MIDI Setup.app.
For 2., you can add a listener for the default output device and update the multi-output device above when it changes; a rough sketch of such a listener follows.
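In rough C (the rebuild function is a placeholder for your own logic), the listener for 2. might look like this:

#include <CoreAudio/CoreAudio.h>

// Hypothetical: your own code that recreates/updates the multi-output device.
extern void rebuild_multi_output_device(void);

// Invoked whenever the system default output device changes.
static OSStatus defaultOutputChanged(AudioObjectID inObjectID,
                                     UInt32 inNumberAddresses,
                                     const AudioObjectPropertyAddress *inAddresses,
                                     void *inClientData)
{
    rebuild_multi_output_device();
    return noErr;
}

static void watchDefaultOutput(void)
{
    // The "default output device" property lives on the system object.
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDefaultOutputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    AudioObjectAddPropertyListener(kAudioObjectSystemObject, &addr,
                                   defaultOutputChanged, NULL);
}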
You can do most of the above in Swift, although for ring-buffer stowing from the buffer-delivery callbacks you'll have to use C or some other language that can respect the realtime audio rules (no locks, no memory allocation, etc.); an illustrative ring buffer is sketched below. You could maybe try AVAudioEngine to do the tap, but IIRC changing the input device is a vale of tears.
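To show what "realtime-safe" means in practice, here is a rough single-producer/single-consumer ring buffer in C11 that an IOProc could write into without locks or allocation. The capacity and names are arbitrary; this is only a sketch:

#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>

#define RB_BYTES (1u << 18)                 /* capacity, must be a power of two */

typedef struct {
    uint8_t        data[RB_BYTES];
    _Atomic size_t head;                    /* only advanced by the audio thread */
    _Atomic size_t tail;                    /* only advanced by the consumer thread */
} ring_buffer;

/* Producer side (audio callback): copy in as much as fits, drop the rest. */
static size_t rb_write(ring_buffer *rb, const void *src, size_t n)
{
    size_t head = atomic_load_explicit(&rb->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&rb->tail, memory_order_acquire);
    size_t space = RB_BYTES - (head - tail);
    if (n > space) n = space;
    for (size_t i = 0; i < n; i++)
        rb->data[(head + i) & (RB_BYTES - 1)] = ((const uint8_t *)src)[i];
    atomic_store_explicit(&rb->head, head + n, memory_order_release);
    return n;
}

/* Consumer side (your processing thread): read out whatever is available. */
static size_t rb_read(ring_buffer *rb, void *dst, size_t n)
{
    size_t tail = atomic_load_explicit(&rb->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&rb->head, memory_order_acquire);
    size_t avail = head - tail;
    if (n > avail) n = avail;
    for (size_t i = 0; i < n; i++)
        ((uint8_t *)dst)[i] = rb->data[(tail + i) & (RB_BYTES - 1)];
    atomic_store_explicit(&rb->tail, tail + n, memory_order_release);
    return n;
}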
SUMMARY: Cannot copy more than 32GB of files to a 128GB memory stick formatted under FAT32 or exFAT despite the fact that I can format the stick and ChkDsk is showing the correct results after formatting (and also when less than 32GB of files are on the stick). I cannot use NTFS because this stick is designed to transfer files to an iPhone and the app will not handle NTFS. See below for details.
DETAILS:
I have a 128GB memory stick which is designed to quickly transfer files between a computer and an iPhone. One end is a USB and the other plugs into the iPhone's lightning port. This particular type is extremely common and looks like a "T" when you unfold it (Amazon link: https://www.amazon.com/gp/product/B07SB12JHG ).
While this stick is not especially fast when I copy Windows data to it, the transfer rate to my iPhone is much better than the wireless alternatives.
Normally I'd format a large memory stick or USB drive in NTFS, but the app used to transfer files to my iPhone ("CooDisk") will only handle exFAT and FAT32. I've tried both. For exFAT formatting, I've tried both Windows 7 and 10, and for FAT32 I used a free product from RidgeCrop consulting (I can give you the link if you want).
As with all USB storage devices, my stick is formatted as a single active partition.
I do not have a problem formatting. After formatting, ChkDsk seems happy with both FAT32 and exFAT. The CooDisk app works fine with either. After formatting, all the space is ostensibly available for files.
My problem arises when populating the stick with files.
Whenever I get beyond 32GB in total space, I have various problems. Either the copy will fail, or ChkDsk will fail. (After running ChkDsk in 'fix' mode, every file created beyond the 32GB limit will be clobbered.) Interestingly, when I use the DOS copy command with "/v" (verify) it will flag an error for files beyond the 32GB limit, although DOS XCopy with "/v" keeps on going. GUI methods also die at 32GB.
Out of sheer desperation, I wrote a script that uses GNU's cp for Windows. Now I can copy more than 32GB of files and ChkDsk flags no errors. However files beyond the 32GB limit end up being filled with binary zeros despite the fact that they appear as they should in a directory or Windows file explorer listing. (Weird, isn't it?)
I have also tried various allocation unit sizes from 4K all the way up to 64K and attempted this with three different Windows OSs (XP, Win7, and Win10).
Let me emphasize: there is no problem with the first 32GB of files copied to the stick regardless of: whether I use exFAT or FAT32; my method of copying; and my choice of AU size.
Finally, there is nothing in these directories that would bother a FAT32 or exFAT system: (a) file and directory names are short (well under 100 characters); (b) directory nesting is minimal (no more than 5 levels); (c) files are small (nothing close to a GB); and (d) directories have relatively few files (nowhere close to 200, for those of you who recall the old FAT limit of 512 files per directory :)
The only platform I haven't yet tried is using an aging MacBook that someone gave to me. I'm not terribly good with Macs, but I would rather not be dependent on it (it's 13 years old, although MacBooks are built like tanks).
Also, is it possible that FAT32 and exFAT don't allow more than 32GB on an active partition? (I can find no such limitation documented anywhere; in fact, in my experience USB storage devices are always bootable, as was the original version of my stick.)
Any ideas??
I have a project that needs to be compatible with STM32F1s and STM32F4s. I'm starting with a basic project that can use GPIOs and am now trying to get USB HID support. I have USB HID working on STM32F4s in another project using the standard peripheral drivers and USB OTG, but am having a difficult time with the HAL drivers. No matter what I've tried, the USB device keeps showing up as an Unknown Device in Windows. Where can I best start debugging this issue? Stepping through the code over SWD, the board seems to be working as it should. As far as I can tell, the endpoints and descriptors for HID are correct.
Use STM32CubeMX to set up USB for you. Then you need to change the heap size, because the default one is not big enough. For some reason you cannot change the heap size from STM32CubeMX. To change it, edit the startup file (startup_stm32f4.....s) and find the line:
Heap_Size EQU 0x00000200
and change the value to a bigger one, for example:
Heap_Size EQU 0x00002000
I am putting together a very simple device for developing C++ applications on the go. Something like a PSP, size-wise, with a small keyboard to write code and an LCD screen for the output. The device also has a DVI out, so I can output to an external screen too.
Now I am looking for a very lightweight C++ IDE for X11 (the device runs a Linux variant built with OpenEmbedded); something able to work decently on the DVI monitor but mainly on the small LCD screen (4.3 inches, 480x272 resolution).
I am aware that anything under 1024x768 is just pure joy for pain lovers, but the device will be used only in particular situations; when a laptop/netbook is not feasible.
While using the DVI out I have no problem running GNOME; the screen is big enough to let me work comfortably. The problem is that it won't scale very well on the 480x272 screen.
My first idea was to use nano and g++, without even loading X11, but if possible I need something with code completion and a minimal UI for the standard operations (build, run, step-by-step debugging, breakpoints). The memory is not that big (256 MB), so the smaller the IDE, the better.
What would you suggest, other than the text-editor-plus-g++ approach? I am new to the embedded Linux world. I use Eclipse on Ubuntu, but it is incredibly huge and would overwhelm such a small device. On Windows I would use Dev-C++, and on Mac I use either Code::Blocks or Xcode, but I don't really need all those features on this device.
Thanks in advance
I had the same problem, developing on an ODroid (http://hardkernel.com) running Linux on an ARM processor. For a while I used Netbeans, but it was too heavy, so I finally gave up and wrote my own open source IDE for the purpose.
https://github.com/amirgeva/coide