Getting detailed information about the Smart Battery of an Apple MacBook is pretty easy, e.g. via this Terminal command:
$ ioreg -b -w 0 -f -r -c AppleSmartBattery
One element of the result is the "ManufactureDate"; you get an integer like 19666: a bitfield whose 16 bits represent the date (day, month, year) ...
/* kBManufactureDateCmd
 * Date is published in a bitfield per the Smart Battery Data spec rev 1.1
 * in section 5.1.26
 * Bits 0...4  => day (value 1-31; 5 bits)
 * Bits 5...8  => month (value 1-12; 4 bits)
 * Bits 9...15 => years since 1980 (value 0-127; 7 bits)
 */
... as described in AppleSmartBatteryCommands.h or in the Smart Battery Specifications.
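For example, decoding 19666 with that layout (a quick C sketch, not Apple's code):
#include <stdio.h>

int main(void) {
    unsigned raw   = 19666;                      /* example ManufactureDate */
    unsigned day   =  raw        & 0x1F;         /* bits 0...4  */
    unsigned month = (raw >> 5)  & 0x0F;         /* bits 5...8  */
    unsigned year  = ((raw >> 9) & 0x7F) + 1980; /* bits 9...15 */
    printf("%u-%02u-%02u\n", year, month, day);  /* prints 2018-06-18 */
    return 0;
}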
But now, on the macOS Big Sur beta, the "ManufactureDate" element in the result of the same query is no longer a 5-digit integer (16 bits); it's a value like 54091727255090 (46 bits).
Any ideas what changed? Maybe it's just a bug in the beta? (I have an app on the Mac App Store that reads this value.)
I wish to read byte sequences that will not decode as valid UTF-8, specifically byte sequences that correspond to high and low surrogate code points. The result should be a Raku string.
I read that, in Raku, the 'utf8-c8' encoding can be used for this purpose.
Consider code point U+D83F. It is a high surrogate (reserved for the high half of UTF-16 surrogate pairs).
U+D83F has a byte sequence of 0xED 0xA0 0xBF, if encoded as UTF-8.
Slurping a file? Works
If I slurp a file containing this byte sequence, using 'utf8-c8' as the encoding, I get the expected result:
echo -n $'\ud83f' >testfile # Create a test file containing the byte sequence
myprog1.raku:
#!/usr/local/bin/raku
$*OUT.encoding('utf8-c8');
print slurp('testfile', enc => 'utf8-c8');
$ ./myprog1.raku | od -An -tx1
ed a0 bf
✔️ expected result
Slurping a filehandle? Doesn't work
But if I switch from slurping a file path to slurping a filehandle, it doesn't work, even though I set the filehandle's encoding to 'utf8-c8':
myprog2.raku
#!/usr/local/bin/raku
$*OUT.encoding('utf8-c8');
my $fh = open "testfile", :r, :enc('utf8-c8');
print slurp($fh, enc => 'utf8-c8');
#print $fh.slurp; # I tried this too: same error
$ ./myprog2.raku
Error encoding UTF-8 string: could not encode Unicode Surrogate codepoint 55359 (0xD83F)
in block <unit> at ./myprog2.raku line 4
Environment
Edit 2022-10-30: I originally used my distro's package (Fedora Linux 36: Rakudo version 2020.07). I just downloaded the latest Rakudo binary release (2022.07-01). Result was the same.
$ /usr/local/bin/raku --version
Welcome to Rakudo™ v2022.07.
Implementing the Raku® Programming Language v6.d.
Built on MoarVM version 2022.07.
$ uname -a
Linux hx90 5.19.16-200.fc36.x86_64 #1 SMP PREEMPT_DYNAMIC Sun Oct 16 22:50:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -a
LSB Version: :core-4.1-amd64:core-4.1-noarch
Distributor ID: Fedora
Description: Fedora release 36 (Thirty Six)
Release: 36
Codename: ThirtySix
I'm using a USB CD/DVD drive without a built-in sound decoder, so I'm sending the audio through ALSA, which already works. The host is a Raspberry Pi 3B running the current Raspbian. Here is the corresponding config file:
pi@autoradio:/etc $ cat asound.conf
pcm.dmixer {
    type dmix
    ipc_key 1024
    ipc_perm 0666
    slave {
        pcm "hw:0,0"
        period_time 0
        period_size 1024
        buffer_size 4096
        rate 192000
        format S32_LE
        channels 2
    }
    bindings {
        0 0
        1 1
    }
}
pcm.dsnooper {
    type dsnoop
    ipc_key 2048
    ipc_perm 0666
    slave {
        pcm "hw:0,0"
        period_time 0
        period_size 1024
        buffer_size 4096
        rate 192000
        format S32_LE
        channels 2
    }
    bindings {
        0 0
        1 1
    }
}
pcm.duplex {
    type asym
    playback.pcm "dmixer"
    capture.pcm "dsnooper"
}
pcm.!default {
    type plug
    slave.pcm "duplex"
}
ctl.!default {
    type hw
    card 0
}
To read the music from the CD-DA, I'm going to use the CDIO++ library. Its cd-info utility recognises both the drive and the audio CD:
pi@autoradio:/etc $ cd-info
cd-info version 2.1.0 armv7l-unknown-linux-gnueabihf
CD location : /dev/cdrom
CD driver name: GNU/Linux
access mode: IOCTL
Vendor : MATSHITA
Model : CD-RW CW-8124
Revision : DA0D
Hardware : CD-ROM or DVD
Can eject : Yes
Can close tray : Yes
Can disable manual eject : Yes
Can select juke-box disc : No
Can set drive speed : No
Can read multiple sessions (e.g. PhotoCD) : Yes
Can hard reset device : Yes
Reading....
Can read Mode 2 Form 1 : Yes
Can read Mode 2 Form 2 : Yes
Can read (S)VCD (i.e. Mode 2 Form 1/2) : Yes
Can read C2 Errors : Yes
Can read ISRC : Yes
Can read Media Channel Number (or UPC) : Yes
Can play audio : Yes
Can read CD-DA : Yes
Can read CD-R : Yes
Can read CD-RW : Yes
Can read DVD-ROM : Yes
Writing....
Can write CD-RW : Yes
Can write DVD-R : No
Can write DVD-RAM : No
Can write DVD-RW : No
Can write DVD+RW : No
__________________________________
Disc mode is listed as: CD-DA
I've already got some code to send the PCM data to the sound card, and some insight into the (rather poorly documented) CDIO API (I know that the readSectors() method is used for reading sound data from the CD, sector after sector), but not really a clue how to hand the data over from the CD-DA input to the ALSA output routine correctly.
Please note that mplayer is off-limits to me, as this routine will be part of a larger solution.
Any help would be greatly appreciated.
UPDATE: Does it matter that the block size of an audio CD (2,352 bytes) differs from that of the sound output (910 bytes, at least in my particular case)?
CD audio data is just two channels of little-endian 16-bit samples at 44.1 kHz.
If you output the data to the standard output, you can pipe it into your sound-playing program, or aplay:
./my-read-cdda | ./play 44100 2 99999
./my-read-cdda | aplay --file-type raw --format cd
If you want to do everything in a single program, replace the read(0, ...) with readSectors(). (The buffer size does not need to have any relation with ALSA's period size or buffer size.)
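Here is a minimal single-program sketch in C, with the assumptions flagged: it uses libcdio's plain C call cdio_read_audio_sectors() (roughly the C counterpart of the C++ wrapper's readSectors()), sets up ALSA with the convenience function snd_pcm_set_params(), plays only the first track, and keeps error handling to a bare minimum:
#include <stdio.h>
#include <stdint.h>
#include <cdio/cdio.h>
#include <alsa/asoundlib.h>

int main(void) {
    CdIo_t *cd = cdio_open(NULL, DRIVER_DEVICE);   /* default CD drive */
    if (!cd) { fprintf(stderr, "no CD drive found\n"); return 1; }

    track_t first = cdio_get_first_track_num(cd);
    lsn_t lsn  = cdio_get_track_lsn(cd, first);
    lsn_t last = cdio_get_track_last_lsn(cd, first);

    snd_pcm_t *pcm;
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0) {
        cdio_destroy(cd);
        return 1;
    }
    /* CD-DA is 16-bit signed little-endian, 2 channels, 44100 Hz */
    snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                       SND_PCM_ACCESS_RW_INTERLEAVED,
                       2, 44100, 1 /* soft resample */, 500000 /* 0.5 s latency */);

    uint8_t buf[CDIO_CD_FRAMESIZE_RAW];            /* 2352 bytes per sector */
    for (; lsn <= last; lsn++) {
        if (cdio_read_audio_sectors(cd, buf, lsn, 1) != DRIVER_OP_SUCCESS)
            break;
        /* 2352 bytes = 588 stereo frames; no relation to ALSA's period size needed */
        snd_pcm_writei(pcm, buf, CDIO_CD_FRAMESIZE_RAW / 4);
    }
    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    cdio_destroy(cd);
    return 0;
}
Since your default PCM is a plug over the dmix slave, ALSA should take care of resampling the 44.1 kHz CD data to the slave's fixed 192 kHz rate.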
When I synthesize an empty circuit using Yosys and arachne-pnr, I get a few irregular bits:
.io_tile 6 17
IoCtrl IE_1
.io_tile 6 0
IoCtrl REN_0
IoCtrl REN_1
These are also part of every other file I could generate so far. Since an unused I/O tile has both IE bits set, I read this as:
for IE/REN block 6 17 0, the input buffer is enabled
for IE/REN block 6 0 0, the input buffer is enabled and the pull-up resistor is disabled
for IE/REN block 6 0 1, the input buffer is enabled and the pull-up resistor is disabled
However, according to the documentation, there is no IE/REN block 6 17 0.
What is the meaning of these bits? If the IE bit of block 6 17 0 is unset because the block doesn't exist, why aren't the bits of the other blocks which don't exist unset, too? The other IE/REN blocks seem to correspond to I/O blocks 6 0 1 and 7 0 0. What do these blocks do, and why are they always configured as inputs?
The technology library entry for SB_IO does not mention the IE bit. How is it related to the PIN_TYPE parameter settings?
When I use an I/O pin as an input, the REN bit is set (the pull-up resistor disabled). This suggests that the pull-up resistors are primarily intended to keep unused pins from floating, not to provide a pull-up resistor for conditionally connected inputs (e.g. buttons). Is this assumption correct? Would it be ok to use the internal pull-up resistors for that purpose?
The technology library says the following:
defparam IO_PIN_INST.PULLUP = 1'b0;
// By default, the IO will have NO pull up.
// This parameter is used only on bank 0, 1,
// and 2. Ignored when it is placed at bank 3
Does this mean bank 3 doesn't have pull-up resistors, or merely that they can't be re-enabled using Verilog? What would happen if I cleared that bit in the ASCII bitstream manually? (I'd try this, but the iCEstick evaluation board doesn't make any pin on bank 3 accessible – a coincidence? – and I'm not sure if I want to mess with the hardware yet.)
When I use an I/O pin as an output, the IE bit isn't cleared, but the input pin function is set to PIN_INPUT. What effect does this have, and why is it done?
The default behavior for unused IO pins is to enable the pullup resistors and disable input enable. On iCE40 1k chips this means IE_0 and IE_1 are set and REN_0 and REN_1 are cleared in an unused IO tile. (On 8k chips IE_* is active high, i.e. all bits are cleared in an unused IO tile on an 8k chip.)
icebox_explain by default hides tiles that have "uninteresting" contents. (Run icebox_explain -A to disable this feature.)
It looks like arachne-pnr does not set those bits for IO pins that are not available in the current package. Thus you get some unusual bit pattern in some IO tiles that contain IE/REN bits for IO blocks not connected to any package pin.
This is what a "normal" unused IO tile looks like on the 1k architecture:
$ icebox_explain -mAt '1 0' example.asc
Reading file 'example.asc'..
Fabric size (without IO tiles): 12 x 16
.io_tile 1 0
B0 ------------------
B1 ------------------
B2 ------------------
B3 ------------------
B4 ------------------
B5 ------------------
B6 ---+--------------
B7 ------------------
B8 ------------------
B9 ---+--------------
B10 ------------------
B11 ------------------
B12 ------------------
B13 ------------------
B14 ------------------
B15 ------------------
IoCtrl IE_0
IoCtrl IE_1
Would it be ok to use the internal pull-up resistors for that purpose?
Yes.
The technology library entry for SB_IO does not mention the IE bit. How is it related to the PIN_TYPE parameter settings?
When D_IN_0 or D_IN_1 from SB_IO is connected to something, then this implies IE.
When I use an I/O pin as an output, the IE bit isn't cleared
Note that IE is active low on 1k chips and active high on 8k chips. When you use an I/O pin as output-only pin on a 1k device, then the corresponding IE bit should be set, otherwise it should be cleared.
I found the explanation, so I'm sharing it here for future reference:
.io_tile 6 17
IoCtrl IE_1
I/O block 6 17 0 is connected to a pin in some packages, but not in the TQFP package.
.io_tile 6 0
IoCtrl REN_0
IoCtrl REN_1
This corresponds to I/O blocks 6 0 1 and 7 0 0 (pins 49 and 50) into whose input paths the PLL PORTA and PORTB clocks are fed, respectively.
To be able to run my eSATA SheevaPlug with Debian Wheezy, I had to upgrade U-Boot to the DENX version.
As a step-by-step guide I used this write-up from Martin Michlmayr. I did the upgrade using screen and a USB stick attached to the plug.
The upgrade went well, and after a reset the plug started with the new version.
Marvell>> version
U-Boot 2013.10 (Oct 21 2013 - 21:06:56)
Marvell-Sheevaplug - eSATA - SD/MMC
gcc (Debian 4.8.1-9) 4.8.1
GNU ld (GNU Binutils for Debian) 2.23.52.20130727
Marvell>>
The guide says to set the machid environment variable and the MAC address.
But unfortunately saveenv fails due to bad blocks in the NAND. I tried different versions of U-Boot, including the one provided by NewIT; all behave the same way.
Marvell>> setenv machid a76
Marvell>> saveenv
Saving Environment to NAND...
Erasing NAND...
Skipping bad block at 0x00060000
Writing to NAND... FAILED!
There are some blocks marked as bad, which according to NewIT might be normal.
Marvell>> nand info
Device 0: nand0, sector size 128 KiB
Page size 2048 b
OOB size 64 b
Erase size 131072 b
Marvell>> nand bad
Device 0 bad blocks:
00060000
00120000
00360000
039c0000
0c300000
10dc0000
1ac40000
1f1c0000
Does anyone have a clue what the problem is, and what I need to change to be able to save environment variables in U-Boot?
Due to the configuration of the environment variable storage in NAND, the 128k sector size, and a bad block at exactly the environment storage address, it is not possible to write the environment to NAND.
Marvell>> nand bad
Device 0 bad blocks:
00060000
...
The environment location is defined in include/configs/sheevaplug.h, which points exactly at the bad block:
/*
* max 4k env size is enough, but in case of nand
* it has to be rounded to sector size
*/
#define CONFIG_ENV_SIZE 0x20000 /* 128k */
#define CONFIG_ENV_ADDR 0x60000
#define CONFIG_ENV_OFFSET 0x60000 /* env starts here */
Because the sector from 0x80000 to 0x9FFFF is unused, I moved the environment storage there:
/*
* max 4k env size is enough, but in case of nand
* it has to be rounded to sector size
*/
#define CONFIG_ENV_SIZE 0x20000 /* 128k */
#define CONFIG_ENV_ADDR 0x80000
#define CONFIG_ENV_OFFSET 0x80000 /* env starts here due to bad block */
Beware! We have to ensure our compiled u-boot.kwb is less than 384k (0x60000, the offset of the first bad block). Otherwise we will write U-Boot into memory marked as bad blocks and brick the device.
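A quick way to verify before flashing:
$ wc -c u-boot.kwb
The printed size must stay below 393216 bytes (0x60000).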
The best way to recompile with a custom environment address is to use Michlmayr's sources, which include patches for MMC and eSATA support.
I'm working on a project that has tight boot time requirements. The targeted architecture is an IA-32 based processor running in 32 bit protected mode. One of the areas identified that can be improved is that the current system dynamically initializes the processor's IDT (interrupt descriptor table). Since we don't have any plug-and-play devices and the system is relatively static, I want to be able to use a statically built IDT.
However, this is proving troublesome on IA-32, since the 8-byte interrupt gate descriptor splits the ISR address: the low 16 bits of the address go in the first 2 bytes of the descriptor, other bits fill the next 4 bytes, and the high 16 bits of the address go in the last 2 bytes.
I wanted to use a const array to define the IDT and then simply point the IDT register at it like so:
typedef struct s_myIdt {
unsigned short isrLobits;
unsigned short segSelector;
unsigned short otherBits;
unsigned short isrHibits;
} myIdtStruct;
myIdtStruct myIdt[256] = {
{ (unsigned short)myIsr0, 1, 2, (unsigned short)(myIsr0 >> 16)},
{ (unsigned short)myIsr1, 1, 2, (unsigned short)(myIsr1 >> 16)},
etc.
Obviously this won't work, as it is illegal in C: myIsr0 is not a compile-time constant. Its value is resolved by the linker (which can do only a limited amount of math), not by the compiler.
Any recommendations or other ideas on how to do this?
You've run into a well-known x86 wart. I don't believe the linker can stuff the addresses of your ISR routines into the swizzled form expected by the IDT entries.
If you are feeling ambitious, you could create an IDT builder script that does something like the following (Linux-based) approach. I haven't tested this scheme, and it probably qualifies as a nasty hack anyway, so tread carefully.
Step 1: Write a script to run 'nm' and capture the stdout.
Step 2: In your script, parse the nm output to get the memory address of all your interrupt service routines.
Step 3: Output a binary file, 'idt.bin', that has the IDT bytes all set up and ready for the LIDT instruction. Your script obviously outputs the ISR addresses in the correct swizzled form.
Step 4: Convert this raw binary into an ELF section with objcopy:
objcopy -I binary -O elf32-i386 idt.bin idt.elf
Step 5: Now the idt.elf file has your IDT binary, with symbols something like this:
> nm idt.elf
000000000000000a D _binary_idt_bin_end
000000000000000a A _binary_idt_bin_size
0000000000000000 D _binary_idt_bin_start
Step 6: Relink your binary, including idt.elf. In your assembly stubs and linker scripts you can refer to the symbol _binary_idt_bin_start as the base of the IDT; for example, your linker script can place _binary_idt_bin_start at any address you like.
Be careful that relinking with the IDT section doesn't move anything else in your binary, e.g. your ISR routines. Manage this in your linker script (.ld file) by putting the IDT into its own dedicated section.
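For step 3, here is a hypothetical sketch of the generator in C rather than a script (untested, like the rest of this scheme). It reads nm-style lines ("hex-address type name") on stdin, already filtered and sorted into vector order, and writes pre-swizzled 8-byte descriptors to idt.bin; the 0x0008 selector and 0x8E00 type bits (present, DPL 0, 32-bit interrupt gate) are assumptions to adjust for your system:
#include <stdio.h>
#include <stdint.h>

int main(void) {
    FILE *out = fopen("idt.bin", "wb");
    if (!out) { perror("idt.bin"); return 1; }

    unsigned long addr;
    char name[128];
    /* assumes a little-endian build host, matching the x86 target */
    while (scanf("%lx %*s %127s", &addr, name) == 2) {
        uint16_t desc[4] = {
            (uint16_t)(addr & 0xFFFFu),  /* ISR address bits 15..0  */
            0x0008,                      /* code segment selector   */
            0x8E00,                      /* present, DPL 0, 32-bit interrupt gate */
            (uint16_t)(addr >> 16)       /* ISR address bits 31..16 */
        };
        fwrite(desc, sizeof desc, 1, out);
    }
    fclose(out);
    return 0;
}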
---EDIT---
From comments, there seems to be confusion about the problem. The 32-bit x86 IDT expects the address of the interrupt service routine to be split into two different 16-bit words, like so:
31 16 15 0
+---------------+---------------+
| Address 31-16 | |
+---------------+---------------+
| | Address 15-0 |
+---------------+---------------+
A linker is thus unable to plug in the ISR address as a normal relocation. So, at boot time, software must construct this split format, which slows boot time.
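Concretely, this is the per-entry work that must otherwise happen at boot (a sketch reusing the question's myIdtStruct; the selector and gate-type values are illustrative):
/* Boot-time construction of one IDT entry: the swizzle a
 * pre-built table eliminates. 0x0008 and 0x8E00 are example values. */
void set_gate(myIdtStruct *e, void (*isr)(void)) {
    unsigned int addr = (unsigned int)isr;
    e->isrLobits   = (unsigned short)(addr & 0xFFFFu); /* address bits 15..0  */
    e->segSelector = 0x0008;                           /* code segment selector */
    e->otherBits   = 0x8E00;                           /* present, DPL 0, 32-bit gate */
    e->isrHibits   = (unsigned short)(addr >> 16);     /* address bits 31..16 */
}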