What is / where can I find more info on "HI2COUT"? - hardware

I'm looking to bit-bang the I2C interface of an MCP23017 with an ATtiny13A. A lot of places mention HI2COUT as a way to send data on the I2C bus, but I have no clue whether it is part of a language, a library, or just a description of what happens when it is called. So the questions:
1) Where can I get info on HI2COUT?
2) If anyone has ever interfaced with an MCP23017, can you post the proper sequence to set 1 (or all) pins as outputs and drive them HIGH? (This includes start, write address, write register IOCON, ..., stop, etc.)
3) This may be too "hardware" for Stack Overflow; if anyone knows of a site better suited to this question (or has the answer), please let me know.

Do you mean you're interested in programming the ATtiny13A (so that it can talk to a target device, which happens to be a MCP23017 but that's not an important detail)?
Just guessing, HI2COUT might be the name of a memory-mapped register used to output data to the I2C peripheral of a microprocessor. However, looking at the ATtiny13A data sheet and the MCP23017 data sheet, I can't see any register with that name. Perhaps it is the name of a register in the I2C peripheral of a different type of microcontroller?
The MCP23017 has I2C hardware built in; see section 1.3.2 "I2C Interface", starting on page 5 of the MCP23017 data sheet. It will tell you how to do I2C on that device. But assuming it's the ATtiny13A you want to program, it looks as though it has no I2C hardware, so, as you say, bit-banging is needed.
I suggest doing an Internet search for "ATtiny13A i2c" and you should be able to find several examples.
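Since the thread never gave a concrete sequence for question 2, here is a rough, untested sketch in C of one possible bit-banged write sequence. The pin assignment (PB0/PB1), the 1.2 MHz clock and the 0x20 slave address (A2..A0 tied low) are assumptions; the register addresses assume the default IOCON.BANK = 0 map, where IODIRA is 0x00 and OLATA is 0x14. External pull-ups on SDA and SCL are required.

    /* Rough, untested sketch: bit-banged I2C master on an ATtiny13A writing to
     * an MCP23017 with A2..A0 tied low (7-bit address 0x20). */
    #define F_CPU 1200000UL        /* assumed: 9.6 MHz internal RC / 8 */
    #include <avr/io.h>
    #include <stdint.h>
    #include <util/delay.h>

    #define SDA PB0                /* assumed pin assignment */
    #define SCL PB1

    /* Open-drain emulation: drive low by making the pin an output (PORT bit 0),
     * release by making it an input and letting the external pull-up raise it. */
    static void sda_high(void) { DDRB &= ~(1 << SDA); }
    static void sda_low(void)  { PORTB &= ~(1 << SDA); DDRB |= (1 << SDA); }
    static void scl_high(void) { DDRB &= ~(1 << SCL); }
    static void scl_low(void)  { PORTB &= ~(1 << SCL); DDRB |= (1 << SCL); }

    static void i2c_start(void) { sda_high(); scl_high(); _delay_us(5); sda_low(); _delay_us(5); scl_low(); }
    static void i2c_stop(void)  { sda_low(); _delay_us(5); scl_high(); _delay_us(5); sda_high(); _delay_us(5); }

    static void i2c_write_byte(uint8_t b)
    {
        for (uint8_t i = 0; i < 8; i++) {      /* MSB first */
            if (b & 0x80) sda_high(); else sda_low();
            _delay_us(5); scl_high(); _delay_us(5); scl_low();
            b <<= 1;
        }
        sda_high();                            /* release SDA for the ACK bit */
        _delay_us(5); scl_high(); _delay_us(5); scl_low();   /* ACK is ignored here */
    }

    int main(void)
    {
        /* IODIRA = 0x00: all port-A pins become outputs */
        i2c_start(); i2c_write_byte(0x20 << 1); i2c_write_byte(0x00); i2c_write_byte(0x00); i2c_stop();
        /* OLATA = 0xFF: drive all port-A pins HIGH */
        i2c_start(); i2c_write_byte(0x20 << 1); i2c_write_byte(0x14); i2c_write_byte(0xFF); i2c_stop();
        for (;;) { }
    }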


Writing and Loading Code from QSPI Flash Memory from RP2040

I bought an RP2040 board with 16 MB of QSPI flash memory on the board:
The flash memory (a W25Q128) is connected to the dedicated QSPI pins of the RP2040:
I was trying to work out from the RP2040 datasheet how to access this flash:
But I didn't find out how to:
Initialize the XIP memory in the RP2040 boot;
Flash/access data in the W25Q128; and
Run instructions from the external flash memory.
I was looking for sample code on GitHub but didn't find anything useful for the items I want.
I also found section 2.6.3.1 of the RP2040 datasheet, but I don't know if the XIP cache is what I am looking for...
Has anyone done anything related to this? I am a newbie on this subject, so I apologize if I did something wrong.
Thanks in advance!
In the datasheet there is a hint that this configuration is done automatically by the SDK if certain conditions are met:
Following the GitHub link we get to the assembly source for the second-stage bootloader. This answers the question of how to initialize the memory: it's not done by you (unless you are not using the SDK).
Here are the configuration steps:
SSI stands for Synchronous Serial Interface, and it's what has to be configured to use the flash as XIP. It's described in the datasheet:
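To make the "flash/access data" part concrete: once the second-stage bootloader has configured the SSI, the whole flash appears as ordinary read-only memory. A minimal sketch (the 0x10000000 XIP base address comes from the RP2040 datasheet memory map; the helper name is just illustrative):

    #include <stdint.h>

    #define XIP_BASE 0x10000000u   /* cached, execute-in-place window into the W25Q128 */

    /* Read a 32-bit word stored at byte offset 'off' within the external flash.
     * After the second-stage bootloader has set up the SSI, this is an ordinary
     * (read-only) memory access; no explicit flash commands are needed. */
    static inline uint32_t flash_read_word(uint32_t off)
    {
        return *(const volatile uint32_t *)(XIP_BASE + off);
    }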

How to change the sunlight level (with commands / datapacks) in Minecraft

I'm working on a server where, as a fun thing for late December, I want to make the whole world darker. The Minecraft sun usually has a light level of 15. Is there any way to lower the sun's light level to something like 8 so mobs can spawn on any block not under sunlight?
Any solution like datapacks, commands, or server plugins would work, but I wouldn't be able to use any mods (it would be super simple if I could use mods, but then others would have to download them).
Thanks.
So I found this plugin online, which basically allows you to change light levels for things at will, and might solve your problem. If it doesn't, I also found this forum post, which may also help (but it is a bit more complex). It suggests decompiling Bukkit.jar (or whatever server software you use: Spigot, Paper, etc.), then editing the code that decides where mobs can spawn.
I'd suggest looking into the ProtocolLib plugin to send packets to players. I don't know which packet lighting would be in, but if you use a tool called "pakkit" (https://github.com/Heath123/pakkit) you can watch Minecraft packets, which might help you see what causes light changes (for example: place a torch in front of you and see what packet is sent). Once you know what packets are sent, you can modify them as they are sent to include custom lighting data.

OS X Programmatically Set Default Output Device

I have been attempting to change which audio device my computer sends sound to. My end goal is to create a program that can make my laptop output to its built-in speakers even when headphones are plugged into the headphone jack.
I stumbled across this project, but the methods it uses (specifically AudioHardwareSetProperty) are deprecated. It also just doesn't work (it will say it changed the output device, but sound will still go to my headphones).
CoreAudio seems very poorly documented and I could not find ANY code online that did not use that function. I would go with the deprecated function if it did what I wanted, but it doesn't. I'm unsure whether it's broken or just doesn't do what I think it does, but that really doesn't matter in the end.
I attempted to look at the comments on AudioHardwareSetProperty but all I found was this in the discussion section:
Note that the value of the property should not be considered changed until the
HAL has called the listeners as many properties values are changed
asynchronously. Also note that the same functionality is provided by the
function AudioObjectGetPropertyData().
This is obviously not true, since I know for a fact that AudioObjectGetPropertyData is used for getting information about one specific audio device.
Is what I am trying to do possible with CoreAudio?
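For what it's worth, the non-deprecated counterpart of AudioHardwareSetProperty for this is AudioObjectSetPropertyData, addressed at the system object rather than at an individual device. A minimal sketch, assuming you have already looked up the AudioDeviceID of the built-in speakers (and the asynchronous-change caveat from the quoted docs still applies):

    #include <CoreAudio/CoreAudio.h>

    /* Ask the HAL to route default output to 'device'.  The change may be
     * applied asynchronously, so a property listener is needed to confirm it. */
    static OSStatus set_default_output_device(AudioDeviceID device)
    {
        AudioObjectPropertyAddress addr = {
            kAudioHardwarePropertyDefaultOutputDevice,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster
        };
        return AudioObjectSetPropertyData(kAudioObjectSystemObject, &addr,
                                          0, NULL, sizeof(device), &device);
    }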

Difference between uboot-uart.bin and uboot.bin?

I am trying to flash the very first U-Boot binary (uboot.bin) into the blank NOR flash of a brand new board built around a Marvell 370 SoC (ARM), using TeraTerm (XMODEM/YMODEM/ZMODEM).
When I compile U-Boot, I get two binaries, uboot-uart.bin and uboot.bin.
What is the difference between the two binaries?
I have been instructed to make some DIP switch changes and then load uboot-uart.bin first into the prototype board.
From the manual I understand that the DIP switch setting selects "Boot from UART" as the boot source.
I am new to embedded and want to learn more about this from a U-Boot perspective. Where can I learn about this?
I would also like to know what these XMODEM, YMODEM and ZMODEM things are.
And I would also like to learn how to customize U-Boot for a custom board using the Marvell 370 SoC (ARM).
I would be happy if someone could point me to good resources.
XMODEM itself is quite a simple protocol meant to send files over a serial link; it is explained in detail here.
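To give a feel for how simple the framing is, here is a sketch of one classic (checksum-variant) XMODEM data packet:

    #include <stdint.h>

    /* Classic XMODEM (not XMODEM-CRC): 132-byte packets, acknowledged one by one. */
    #define SOH 0x01   /* start of a 128-byte data block */
    #define EOT 0x04   /* end of transmission            */
    #define ACK 0x06   /* receiver: block accepted       */
    #define NAK 0x15   /* receiver: please resend        */

    struct xmodem_packet {
        uint8_t soh;        /* always SOH */
        uint8_t blk;        /* block number, starts at 1 and wraps around */
        uint8_t blk_inv;    /* 255 - blk, lets the receiver detect corruption */
        uint8_t data[128];  /* payload, last block padded (typically with 0x1A) */
        uint8_t csum;       /* arithmetic sum of the 128 data bytes, modulo 256 */
    };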
Most Marvell ARM chips from the last couple of years have the ability to upload a binary via UART using the XMODEM protocol. There are two ways to do that:
By sending a special sequence to the chip during boot-up (which can be done without any changes to the bootstrap options).
By setting up the bootstrap options accordingly (via DIP switches in your case).
In both cases the chip will then initiate an XMODEM download. TeraTerm should have an option to upload files via the XMODEM protocol; IIRC it is available under File/Transfer/XModem/Send.
Now just send your "uboot-uart.bin" file to the Armada 370 (which will take some time). The SoC will then boot the file just as if it had been loaded from NAND or any other source.
The only difference between your uboot-uart.bin and uboot.bin is most probably the special header that has to be put in front of the actual U-Boot binary: it contains the boot device type the image is meant for, the address in memory the image should be loaded to, and a lot of board-specific settings. The exact structure and content are usually explained in the very good datasheets from Marvell.
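Just to illustrate the idea (this is NOT Marvell's actual layout; the real structure is in the Marvell documentation and in U-Boot's Marvell image tooling), the header in front of the payload carries roughly this kind of information:

    #include <stdint.h>

    /* Hypothetical, for illustration only -- not the real Marvell header format. */
    struct boot_image_header {
        uint8_t  boot_device;     /* which interface the image is meant for: UART, SPI/NOR, NAND, ... */
        uint32_t source_offset;   /* where the actual U-Boot payload starts after this header */
        uint32_t load_address;    /* where in memory the BootROM should copy the payload */
        uint32_t exec_address;    /* entry point to jump to once loading is done */
        /* ... plus checksums and a list of board-specific register settings ... */
    };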
For customizing U-Boot I can only suggest digging into the code provided by Marvell and changing it according to your own board. You'll find the board-specific files under boards/Marvell.

VGA programming without using interrupt (only registers)

I want to develop a VGA graphics driver (for Linux (Ubuntu)) with support for basic primitives such as putpixel, drawline, fillrect and bitblt. I want to do it in protected mode.
I've been googling for a week and the following links are the best I have found:
http://www.brackeen....vga/basics.html
http://www.osdever.n...VGA/vga/vga.htm
http://bos.asmhacker...sing%20bios.htm
Unfortunately, the first one uses a BIOS call, so I cannot use it. The second link has lots of information on the VGA registers but no examples showing how to make them work together. The third is an example of switching to mode 13h, but I've tried it and nothing happened. Can you guys give me a hint? Thanks in advance!
--Vincenzo
My code at http://bos.asmhackers.net/docs/vga_without_bios/snippet_5/vga.php
works fine if you are in 32-bit mode with full hardware access. Unfortunately, I doubt that any Linux variant will let you access the VGA ports directly. I'm not sure how you are developing this driver, but if you have made sure that you have full access to the VGA ports, it should work. In my example code I only switch between mode 0x03 and mode 0x13, but in the folders above you'll be able to find port values for most other common VGA modes, as well as C code to do the switch if you prefer that.
The include files used in Christoffer's code, such as text.inc and font8x16.inc, can be found in the BOS operating system source code:
http://bos.asmhackers.net/downloads.php
This is coming many, many years later, but I think it's still very relevant, and if somebody is struggling I hope they find it useful.
First of all, it is completely possible to configure VGA using only the registers, without interrupts, as hard as it may be. A useful resource about the registers and how to configure them can be found here, but unless you have a ton of time to spare to learn how to do all of it properly, move on to the following section.
If you wish to really learn how to do it, I suggest going through the documentation provided earlier. However, some of it is already done for you!
Chris Giese did a great job demonstrating exactly how to do this for an MS-DOS system, and while you may think that doesn't help you, it really does.
Chris's code can be found here. If you want other useful code, check here as well.
Now, while it only works on MS-DOS, it's actually easy to port to other systems. The code already contains all the data needed to configure the registers for many different modes, and that's the part that saves you a ton of time going through documentation.
The code uses the functions outportb and inportb, which are MS-DOS functions, to write/read a single byte to/from a port. Therefore, you have to redefine these functions to read and write ports on your own system. How complex that is depends on the system you are working on.
In addition, you will also need to provide a way to write to the physical memory region between 0xA0000 and 0xBFFFF, which corresponds to the standard VGA memory area. Once you have that mapped, you also need to redefine the functions pokeb, pokew and peekb, which will help you output things (text or pixel data) on the screen.
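As a rough Linux-flavoured sketch of both points (it needs root, assumes an x86 machine where the legacy VGA aperture is still reachable, and uses glibc's sys/io.h port helpers; the function names just mirror the MS-DOS ones):

    #include <sys/io.h>     /* ioperm, outb, inb (glibc, x86) */
    #include <sys/mman.h>
    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>

    static volatile uint8_t *vga_mem;   /* will point at physical 0xA0000 */

    static int vga_io_init(void)
    {
        /* Grant this process access to the standard VGA register ports. */
        if (ioperm(0x3C0, 0x20, 1) < 0)
            return -1;

        /* Map the legacy VGA memory window 0xA0000-0xBFFFF through /dev/mem. */
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0)
            return -1;
        vga_mem = mmap(NULL, 0x20000, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0xA0000);
        close(fd);
        return (vga_mem == MAP_FAILED) ? -1 : 0;
    }

    /* MS-DOS style wrappers so the original code compiles mostly unchanged. */
    static void    outportb(uint16_t port, uint8_t val) { outb(val, port); }
    static uint8_t inportb(uint16_t port)               { return inb(port); }
    static void    pokeb(uint32_t off, uint8_t val)     { vga_mem[off] = val; }
    static void    pokew(uint32_t off, uint16_t val)    { vga_mem[off] = val & 0xFF; vga_mem[off + 1] = val >> 8; }
    static uint8_t peekb(uint32_t off)                  { return vga_mem[off]; }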
One last note: the code is already set up to work with many different modes, including both text and graphics modes.