Why are some kernel modules marked (F)?

Some of the modules listed in /proc/modules are marked (F), which I think means they were forcefully loaded. I am sure they were not. If I unload and then reload a module, the (F) disappears.
For example:
squashfs 47871 1 - Live 0xffffffffa0100000 (F)
ast 56335 1 - Live 0xffffffffa00c4000 (F)
ttm 79926 1 ast, Live 0xffffffffa00e5000 (F)
drm_kms_helper 50129 1 ast, Live 0xffffffffa00d7000 (F)
drm 272304 3 ast,ttm,drm_kms_helper, Live 0xffffffffa0080000 (F)
i2c_algo_bit 13250 1 ast, Live 0xffffffffa0053000 (F)
i2c_core 38513 6 i2c_dev,i2c_i801,ast,drm_kms_helper,drm,i2c_algo_bit, \
Live 0xffffffffa0075000 (F)
usb_storage 56610 0 - Live 0xffffffffa005d000 (F)
mpt2sas 189642 16 - Live 0xffffffffa0023000 (F)
scsi_transport_sas 39231 1 mpt2sas, Live 0xffffffffa0012000 (F)

Use the source, Luke!
static size_t module_flags_taint(struct module *mod, char *buf)
{
    size_t l = 0;
    if (mod->taints & (1 << TAINT_PROPRIETARY_MODULE))
        buf[l++] = 'P';
    if (mod->taints & (1 << TAINT_OOT_MODULE))
        buf[l++] = 'O';
    if (mod->taints & (1 << TAINT_FORCED_MODULE))
        buf[l++] = 'F';
    if (mod->taints & (1 << TAINT_CRAP))
        buf[l++] = 'C';
    if (mod->taints & (1 << TAINT_UNSIGNED_MODULE))
        buf[l++] = 'E';
    /*
     * TAINT_FORCED_RMMOD: could be added.
     * TAINT_CPU_OUT_OF_SPEC, TAINT_MACHINE_CHECK, TAINT_BAD_PAGE don't
     * apply to modules.
     */
    return l;
}
So the modules definitely were force-loaded, at least as far as the kernel is concerned.
(This can also happen when the kernel was compiled with version information, but the modules were not.)
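For completeness, here is a minimal C sketch (not part of the original answer) that scans /proc/modules for modules carrying the F taint flag, using the same line format as the listing above:
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/modules", "r");
    char line[512];

    if (!f) {
        perror("/proc/modules");
        return 1;
    }
    while (fgets(line, sizeof line, f)) {
        /* Taint flags, if any, appear in a trailing "(...)" field. */
        char *taints = strrchr(line, '(');
        if (taints && strchr(taints, 'F'))
            printf("%.*s\n", (int)strcspn(line, " "), line);   /* module name only */
    }
    fclose(f);
    return 0;
}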

Related

STM32 Timer Input Frequency Configuration (PCLKx_Timer)

I have an issue configuring the timer input frequency. I read over the reference manual a couple of times and afterwards searched the web, but could not find a solution.
template <uintptr_t BASE>
void hsi_config(Config config) {
    auto base = reinterpret_cast<volatile RCC_TypeDef *>(BASE);
    base->CR |= RCC_CR_HSION;
    while (!(base->CR & RCC_CR_HSIRDY))
        ;
    // config flash latency
    FLASH->ACR |= config.ahb_flash_latency;
    // for now always use TIMPRE = 0
    base->DCKCFGR1 &= ~RCC_DCKCFGR1_TIMPRE;
    base->CFGR |= (config.ahb_prescaler << 4U) | (config.apb1_prescaler << 10U) | (config.apb2_prescaler << 13U);
    // config PLL
    base->PLLCFGR = 0;
    base->PLLCFGR |= (config.pllm << 0U) | (config.plln << 6U) | (RCC_PLLCFGR_PLLSRC_HSI << 22U) | (config.pllp << 16U) | (config.pllq << 24U);
    // activate PLL
    base->CR |= RCC_CR_PLLON;
    while (!(base->CR & RCC_CR_PLLRDY))
        ;
    // Change HSI to PLL
    base->CFGR &= ~RCC_CFGR_SW;
    base->CFGR |= RCC_CFGR_SW_PLL;
    while ((base->CFGR & RCC_CFGR_SWS) != RCC_CFGR_SWS_PLL)
        ;
}
With, for example, the following values:
hsi = 16MHz
pllm = 8
plln = 96
pllp = 2
=> sysclk = ((16MHz / 8) * 96) / 2 = 96MHz
ahb_prescaler = 1
=> hclk = 96MHz
apb1_prescaler = 2
=> pclk1 = 48MHz
=> pclk1_timer = pclk1 * 2 = 96MHz (as the APB1 prescaler != 1 and TIMPRE = 0; rule from the TIMPRE bit description)
As a simple app to validate the frequency I used a timer which just counts to 1000 and then reloads.
Input 96MHz
PSC = 47999 => 2 kHz counter frequency
ARR = 2000 => overflow about once a second
However, I get an overflow after 3 seconds => the timer is not running at 96MHz but at 32MHz.
I also checked other configurations, e.g. pclk1_timer = 48MHz; the timer input always seems to be 32MHz (the overflow then happens after 1.5 s).
Am I missing something in my configuration?
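For reference, the clock arithmetic above can be written out as a small standalone sketch (plain C, values hard-coded from the example; not part of the original question):
#include <stdio.h>

int main(void)
{
    const double hsi = 16e6;                      /* HSI oscillator */
    const unsigned pllm = 8, plln = 96, pllp = 2;
    const unsigned ahb_presc = 1, apb1_presc = 2;

    double sysclk = hsi / pllm * plln / pllp;     /* 96 MHz */
    double hclk   = sysclk / ahb_presc;           /* 96 MHz */
    double pclk1  = hclk / apb1_presc;            /* 48 MHz */
    /* TIMPRE = 0 rule quoted above: timer clock = pclk1 if the APB
     * prescaler is 1, otherwise 2 * pclk1. */
    double timclk = (apb1_presc == 1) ? pclk1 : 2.0 * pclk1;

    /* Expected behaviour of the validation timer. */
    const unsigned psc = 47999, arr = 2000;
    double cnt_hz     = timclk / (psc + 1);       /* 2 kHz */
    double overflow_s = (arr + 1) / cnt_hz;       /* ~1 s */

    printf("sysclk=%.0f hclk=%.0f pclk1=%.0f timclk=%.0f\n",
           sysclk, hclk, pclk1, timclk);
    printf("counter=%.0f Hz, overflow every %.4f s\n", cnt_hz, overflow_s);
    return 0;
}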

Understanding RISC-V objdump

I am examining the objdump of a C file that I have compiled using the following commands:
riscv64-unknown-elf-gcc -O0 -o maxmul.o maxmul.c
riscv64-unknown-elf-objdump -d maxmul.o > maxmul.dump
Strangely (or not), the addresses appear not to be aligned on 32-bit words but on 16-bit boundaries.
Can anyone please explain to me why?
Thanks.
objdump excerpt:
00000000000101da <main>:
101da: 7155 addi sp,sp,-208
101dc: e586 sd ra,200(sp)
101de: e1a2 sd s0,192(sp)
101e0: 0980 addi s0,sp,208
...
C-code:
int main()
{
    int first[3][3], second[3][3], multiply[3][3];
    int golden[3][3];
    int sum;
    first[0][0] = 1; first[0][1] = 2; first[0][2] = 3;
    first[1][0] = 4; first[1][1] = 5; first[1][2] = 6;
    first[2][0] = 7; first[2][1] = 8; first[2][2] = 9;
    second[0][0] = 9; second[0][1] = 8; second[0][2] = -7;
    second[1][0] = -6; second[1][1] = 5; second[1][2] = 4;
    second[2][0] = 3; second[2][1] = 2; second[2][2] = -1;
    golden[0][0] = 6; golden[0][1] = 24; golden[0][2] = -2;
    golden[1][0] = 24; golden[1][1] = 69; golden[1][2] = -14;
    golden[2][0] = 42; golden[2][1] = 1140; golden[2][2] = -26;
    int i, ii, iii;
    for (i = 0; i < 3; i++) {
        for (ii = 0; ii < 3; ii++) {
            for (iii = 0; iii < 3; iii++) {
                //printf("first[%d][%d] * second[%d][%d] \n", i, iii, iii, ii);
                //printf("%d * %d (%d,%d)\n", first[i][ii], second[ii][i], i, ii);
                sum += first[i][iii] * second[iii][ii];
            }
            //printf("sum = %d\n", sum);
            multiply[i][ii] = sum;
            sum = 0;
        }
    }
    int c, d;
    int err;
    for ( c = 0; c < 3; c++) {
        for ( d = 0; d < 3; d++) {
            //printf("%d\t", multiply[c][d]);
            if (multiply[c][d] != golden[c][d]) {
                fail(golden[c][d], multiply[c][d]);
                err++;
            }
        }
        //printf("\n");
    }
    if (err == 0) {
        pass();
    }
    return 0;
}
I suspect that your gcc compiles by default for the compressed instruction format, where 16-bit and 32-bit instructions can be intermixed; in that case 16-bit instructions are 16-bit aligned, as you can see in the disassembled code.
Objdump provides the address, the encoding, and the mnemonic; the encoding in your case is always 16 bits wide, which means that the compiler has selected 16-bit instructions whenever possible.
By enabling verbose mode (-v), you can see that by default -march=rv64imafdc and -mabi=lp64d are used. The default targeted ISA thus includes the compressed extension, and the targeted ABI requires the double-precision float extension.
By setting -march=rv64imafd and leaving the ABI unchanged, gcc compiles using only 32-bit instructions, because the compressed ISA is no longer enabled.
The instruction addresses are then always 32-bit aligned.
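For example (a hypothetical invocation mirroring the one in the question, with the compressed extension dropped from -march):
riscv64-unknown-elf-gcc -O0 -march=rv64imafd -mabi=lp64d -o maxmul.o maxmul.c
riscv64-unknown-elf-objdump -d maxmul.o > maxmul.dump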
When compiling (or assembling) to RV64GC or RV32GC (or another target that enables the "C" Standard Extension Compressed Instructions), the compiler (or assembler) automatically replaces some instructions with compressed ones.
Non-compressed instructions are encoded in 32 bit, while compressed instructions are encoded in 16 bit.
When a compressed instruction is emitted, it changes the alignment for the next instruction, either from 32 bit to 16 bit or from 16 bit to 32 bit. That means that not only 16-bit wide instructions may be aligned to a 16-bit address, but also 32-bit wide ones. IOW, both types of instructions (compressed and normal) are tightly packed side by side.
By default, objdump -d doesn't explicitly indicate that an instruction is compressed because it uses the same mnemonic as for the uncompressed variant. Although the number of bytes in the displayed raw instruction gives it away (4 vs. 2 bytes).
However, you can tell objdump to use separate mnemonics for compressed instructions such that they are more easily recognizable (those start with c. then), e.g.:
$ riscv64-unknown-elf-objdump -d -M no-aliases rotate
[..]
101e4: 00d66533 or a0,a2,a3
101e8: 8082 c.jr ra
00000000000101ea <rotr>:
101ea: 00b55633 srl a2,a0,a1
[..]
Note that with the switch -M no-aliases pseudo-instructions aren't displayed anymore, but the corresponding instruction(s) instead.

S3C2440 (ARM9) spi_read_write Flash Memory

I am working on SPI communication, trying to communicate with an SST25VF032B (a 32 Mbit Microchip SPI flash).
When I read the manufacturer ID it shows MF_ID => 4A25BF,
but it should be MF_ID => BF254A. I am getting it simply reversed: the first byte is in the third position and the third byte in the first.
What could be the possible reason for that?
My SPI Init function is here:
//Enable clock control register CLKCON 18 Bit enables SPI
CLKCON |= (0x01 << 18);//0x40000;
printk("s3c2440_clkcon=%08ld\n",CLKCON);
//Enable GPG2 Corresponding NSS port
GPGCON =0x1011;//010000 00 01 00 01
printk("s3c2440_GPGCON=%08ld\n",GPGCON);
SPNSS0_ENABLE();
//Enable GPE 11,12,13,Corresponding MISO0,MOSI0,SCK0 = 11 0x0000FC00
GPECON &= ~((3 << 22) | (3 << 24) | (3 << 26));
GPECON |= ((2 << 22) | (2 << 24) | (2 << 26));
//GPEUP Set; all disable
GPGUP &= ~(0x07 << 2);
GPEUP |= (0x07 << 11);
//SPI Register section
//SPI Prescaler register settings,
//Baud Rate=PCLK/2/(Prescaler value+1)
SPPRE0 = 0x18; //freq = 1M
printk("SPPRE0=%02X\n",SPPRE0);
//polling,en-sck,master,low,format A,nomal = 0 | TAGD = 1
SPCON0 = (0<<5)|(1<<4)|(1<<3)|(0<<2)|(0<<1)|(0<<0);
printk("SPCON1=%02ld\n",SPCON0);
//Multi-host error detection is enabled
SPPIN0 = (0 << 2) | (1 << 1) | (0 << 0);
printk("SPPIN1=%02X\n",SPPIN0);
//Initialization procedure
SPTDAT0 = 0xff;
My spi_read_write function as follows:
static char spi_read_write (unsigned char outb)
{
    // Write and Read a byte on SPI interface.
    int j = 0;
    unsigned char inb;
    SPTDAT0 = outb;
    while(!SPI_TXRX_READY) for(j = 0; j < 0xFF; j++);
    SPTDAT0 = outb;
    //SPTDAT0 = 0xff;
    while(!SPI_TXRX_READY) for(j = 0; j < 0xFF; j++);
    inb = SPRDAT0;
    return (inb);
}
My Calling function is:
MEM_1_CS(0);
spi_read_write(0x9F);
m1 = spi_read_write(0x00);
m2 = spi_read_write(0x00);
m3 = spi_read_write(0x00);
MEM_1_CS(1);
printk("\n\rMF_ID =>%02X-%02X-%02X",m1,m2,m3);
Please guide me on what to do.
Thanks in advance!
There's no apparent problem with the SPI function.
The problem is with your printing function.
ARM is a little-endian processor; it keeps the bytes reversed in memory.
You need to print them in reverse order and you'll be fine.
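A minimal illustration of what that answer suggests (not from the original post) is simply to swap the byte order in the existing printk:
printk("\n\rMF_ID =>%02X-%02X-%02X", m3, m2, m1);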
I was banging my head on this for the last couple of days and finally I found the solution. All I needed was to change my spi_read_write function as follows.
static char spi_read_write (unsigned char outb)
{
    int j = 0;
    unsigned char inb;
    while(!SPI_TXRX_READY) for(j = 0; j < 0xFF; j++);
    SPTDAT0 = outb;
    while(!SPI_TXRX_READY) for(j = 0; j < 0xFF; j++);
    inb = SPRDAT0;
    return (inb);
}
CHANGES MADE:
First we have to wait until SPI_TXRX_READY, and only then fill the transmit register with SPTDAT0 = outb; (previously the register was written before the ready check, and written twice).
Thanks all for your kind support.

Determine the Position of the Most Significant Set Bit in a Byte

I have a byte I am using to store bit flags. I need to compute the position of the most significant set bit in the byte.
Example Byte: 00101101 => 6 is the position of the most significant set bit
Compact Hex Mapping:
[0x00] => 0x00
[0x01] => 0x01
[0x02,0x03] => 0x02
[0x04,0x07] => 0x03
[0x08,0x0F] => 0x04
[0x10,0x1F] => 0x05
[0x20,0x3F] => 0x06
[0x40,0x7F] => 0x07
[0x80,0xFF] => 0x08
TestCase in C:
#include <stdio.h>
unsigned char check(unsigned char b) {
    unsigned char c = 0x08;
    unsigned char m = 0x80;
    do {
        if(m&b) { return c; }
        else { c -= 0x01; }
    } while(m>>=1);
    return 0; //reached only for b == 0
}
int main() {
unsigned char input[256] = {
0x00,0x01,0x02,0x03,0x04,0x05,0x06,0x07,0x08,0x09,0x0a,0x0b,0x0c,0x0d,0x0e,0x0f,
0x10,0x11,0x12,0x13,0x14,0x15,0x16,0x17,0x18,0x19,0x1a,0x1b,0x1c,0x1d,0x1e,0x1f,
0x20,0x21,0x22,0x23,0x24,0x25,0x26,0x27,0x28,0x29,0x2a,0x2b,0x2c,0x2d,0x2e,0x2f,
0x30,0x31,0x32,0x33,0x34,0x35,0x36,0x37,0x38,0x39,0x3a,0x3b,0x3c,0x3d,0x3e,0x3f,
0x40,0x41,0x42,0x43,0x44,0x45,0x46,0x47,0x48,0x49,0x4a,0x4b,0x4c,0x4d,0x4e,0x4f,
0x50,0x51,0x52,0x53,0x54,0x55,0x56,0x57,0x58,0x59,0x5a,0x5b,0x5c,0x5d,0x5e,0x5f,
0x60,0x61,0x62,0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6a,0x6b,0x6c,0x6d,0x6e,0x6f,
0x70,0x71,0x72,0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7a,0x7b,0x7c,0x7d,0x7e,0x7f,
0x80,0x81,0x82,0x83,0x84,0x85,0x86,0x87,0x88,0x89,0x8a,0x8b,0x8c,0x8d,0x8e,0x8f,
0x90,0x91,0x92,0x93,0x94,0x95,0x96,0x97,0x98,0x99,0x9a,0x9b,0x9c,0x9d,0x9e,0x9f,
0xa0,0xa1,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xaa,0xab,0xac,0xad,0xae,0xaf,
0xb0,0xb1,0xb2,0xb3,0xb4,0xb5,0xb6,0xb7,0xb8,0xb9,0xba,0xbb,0xbc,0xbd,0xbe,0xbf,
0xc0,0xc1,0xc2,0xc3,0xc4,0xc5,0xc6,0xc7,0xc8,0xc9,0xca,0xcb,0xcc,0xcd,0xce,0xcf,
0xd0,0xd1,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xd8,0xd9,0xda,0xdb,0xdc,0xdd,0xde,0xdf,
0xe0,0xe1,0xe2,0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0xea,0xeb,0xec,0xed,0xee,0xef,
0xf0,0xf1,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8,0xf9,0xfa,0xfb,0xfc,0xfd,0xfe,0xff };
unsigned char truth[256] = {
0x00,0x01,0x02,0x02,0x03,0x03,0x03,0x03,0x04,0x04,0x04,0x04,0x04,0x04,0x04,0x04,
0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,0x05,
0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,
0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,0x06,
0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,
0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,
0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,
0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,0x07,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,
0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08,0x08};
int i,r;
int f = 0;
for(i=0; i<256; ++i) {
    r=check(input[i]);
    if(r !=(truth[i])) {
        printf("failed %d : 0x%x : %d\n",i,0x000000FF & ((int)input[i]),r);
        f += 1;
    }
}
if(!f) { printf("passed all\n"); }
else { printf("failed %d\n",f); }
return 0;
}
I would like to simplify my check() function to not involve looping (or branching preferably). Is there a bit twiddling hack or hashed lookup table solution to compute the position of the most significant set bit in a byte?
Your question is about an efficient way to compute the log2 of a value. And because you seem to want a solution that is not limited to the C language, I have been slightly lazy and tweaked some C# code I have.
You want to compute log2(x) + 1, and for x = 0 (where log2 is undefined) you define the result as 0 (i.e. you create a special case where log2(0) = -1).
static readonly Byte[] multiplyDeBruijnBitPosition = new Byte[] {
    7, 2, 3, 4,
    6, 1, 5, 0
};

public static Byte Log2Plus1(Byte value) {
    if (value == 0)
        return 0;
    var roundedValue = value;
    roundedValue |= (Byte) (roundedValue >> 1);
    roundedValue |= (Byte) (roundedValue >> 2);
    roundedValue |= (Byte) (roundedValue >> 4);
    var log2 = multiplyDeBruijnBitPosition[((Byte) (roundedValue*0xE3)) >> 5];
    return (Byte) (log2 + 1);
}
This bit twiddling hack is taken from Find the log base 2 of an N-bit integer in O(lg(N)) operations with multiply and lookup where you can see the equivalent C source code for 32 bit values. This code has been adapted to work on 8 bit values.
However, you may be able to use an operation that gives you the result via a very efficient built-in function (on many CPUs a single instruction like Bit Scan Reverse is used). An answer to the question Bit twiddling: which bit is set? has some information about this. A quote from the answer provides one possible reason why there is low-level support for solving this problem:
Things like this are the core of many O(1) algorithms such as kernel schedulers which need to find the first non-empty queue signified by an array of bits.
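If you can rely on GCC or Clang rather than plain portable C, a non-portable sketch (not from the answers above) is to use the count-leading-zeros builtin, which typically compiles down to such a bit-scan instruction:
static unsigned char check_builtin(unsigned char b)
{
    /* __builtin_clz() is undefined for 0, hence the explicit check;
     * b is promoted to a 32-bit value, so the MSB position is 32 - clz. */
    return b ? (unsigned char)(32 - __builtin_clz(b)) : 0;
}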
That was a fun little challenge. I don't know if this one is completely portable since I only have VC++ to test with, and I certainly can't say for sure if it's more efficient than other approaches. This version was coded with a loop but it can be unrolled without too much effort.
static unsigned char check(unsigned char b)
{
    unsigned char r = 8;
    unsigned char sub = 1;
    unsigned char s = 7;
    for (char i = 0; i < 8; i++)
    {
        sub = sub & ((( b & (1 << s)) >> s--) - 1);
        r -= sub;
    }
    return r;
}
I'm sure everyone else has long since moved on to other topics, but there was something in the back of my mind suggesting that there had to be a more efficient branch-less solution to this than just unrolling the loop in my other posted solution. A quick trip to my copy of Warren (Hacker's Delight) put me on the right track: binary search.
Here's my solution based on that idea:
Pseudo-code:
// see if there's a bit set in the upper half
if ((b >> 4) != 0)
{
    offset = 4;
    b >>= 4;
}
else
    offset = 0;
// see if there's a bit set in the upper half of what's left
if ((b & 0x0C) != 0)
{
    offset += 2;
    b >>= 2;
}
// see if there's a bit set in the upper half of what's left
if ((b & 0x02) != 0)
{
    offset++;
    b >>= 1;
}
return b + offset;
Branch-less C++ implementation:
static unsigned char check(unsigned char b)
{
    unsigned char adj = 4 & ((((unsigned char) - (b >> 4) >> 7) ^ 1) - 1);
    unsigned char offset = adj;
    b >>= adj;
    adj = 2 & (((((unsigned char) - (b & 0x0C)) >> 7) ^ 1) - 1);
    offset += adj;
    b >>= adj;
    adj = 1 & (((((unsigned char) - (b & 0x02)) >> 7) ^ 1) - 1);
    return (b >> adj) + offset + adj;
}
Yes, I know that this is all academic :)
It is not possible in plain C. The best I would suggest is the following implementation of check. Despite being quite "ugly", I think it runs faster than the check version in the question.
int check(unsigned char b)
{
    if(b&128) return 8;
    if(b&64) return 7;
    if(b&32) return 6;
    if(b&16) return 5;
    if(b&8) return 4;
    if(b&4) return 3;
    if(b&2) return 2;
    if(b&1) return 1;
    return 0;
}
Edit: I found a link to the actual code: http://www.hackersdelight.org/hdcodetxt/nlz.c.txt
The algorithm below is named nlz8 in that file. You can choose your favorite hack.
/*
From last comment of: http://stackoverflow.com/a/671826/315052
> Hacker's Delight explains how to correct for the error in 32-bit floats
> in 5-3 Counting Leading 0's. Here's their code, which uses an anonymous
> union to overlap asFloat and asInt: k = k & ~(k >> 1); asFloat =
> (float)k + 0.5f; n = 158 - (asInt >> 23); (and yes, this relies on
> implementation-defined behavior) - Derrick Coetzee Jan 3 '12 at 8:35
*/
unsigned char check (unsigned char b) {
    union {
        float asFloat;
        int asInt;
    } u;
    unsigned k = b & ~(b >> 1);
    u.asFloat = (float)k + 0.5f;
    return 32 - (158 - (u.asInt >> 23));
}
Edit -- not exactly sure what the asker means by language independent, but below is the equivalent code in Python.
import ctypes

class Anon(ctypes.Union):
    _fields_ = [
        ("asFloat", ctypes.c_float),
        ("asInt", ctypes.c_int)
    ]

def check(b):
    k = int(b) & ~(int(b) >> 1)
    a = Anon(asFloat=(float(k) + float(0.5)))
    return 32 - (158 - (a.asInt >> 23))

Is there any GMP logarithm function?

Is there any logarithm function implemented in the GMP library?
I know you didn't ask how to implement it, but...
You can implement a rough one using the properties of logarithms: http://gnumbers.blogspot.com.au/2011/10/logarithm-of-large-number-it-is-not.html
And the internals of the GMP library: https://gmplib.org/manual/Integer-Internals.html
(Edit: basically you just use the most significant "digit" (limb) of the GMP representation, since the base of the representation is huge: B^N is much larger than B^{N-1}.)
Here is my implementation for Rationals.
#include <math.h>
#include <limits.h>
#include <stdlib.h>
#include <gmp.h>

double LogE(mpq_t m_op)
{
    // log(a/b) = log(a) - log(b)
    // And if a is represented in base B as:
    //   a = a_N B^N + a_{N-1} B^{N-1} + ... + a_0
    // => log(a) \approx log(a_N B^N)
    //              = log(a_N) + N log(B)
    // where B is the base; ie: ULONG_MAX
    static double logB = log(ULONG_MAX);
    // Undefined logs (should probably return NAN in second case?)
    if (mpz_get_ui(mpq_numref(m_op)) == 0 || mpz_sgn(mpq_numref(m_op)) < 0)
        return -INFINITY;
    // Log of numerator
    double lognum = log(mpq_numref(m_op)->_mp_d[abs(mpq_numref(m_op)->_mp_size) - 1]);
    lognum += (abs(mpq_numref(m_op)->_mp_size)-1) * logB;
    // Subtract log of denominator, if it exists
    if (abs(mpq_denref(m_op)->_mp_size) > 0)
    {
        lognum -= log(mpq_denref(m_op)->_mp_d[abs(mpq_denref(m_op)->_mp_size)-1]);
        lognum -= (abs(mpq_denref(m_op)->_mp_size)-1) * logB;
    }
    return lognum;
}
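A hypothetical usage sketch (not part of the original answer), assuming LogE() and the headers above are in scope:
#include <stdio.h>   // in addition to the headers above

int main(void)
{
    mpq_t q;
    mpq_init(q);
    mpq_set_ui(q, 123456789, 1000);   // the rational 123456789/1000
    printf("LogE(q) = %f, log(123456.789) = %f\n", LogE(q), log(123456.789));
    mpq_clear(q);
    return 0;
}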
(Much later edit)
Coming back to this 5 years later, I just think it's cool that the core concept of log(a) = N log(B) + log(a_N) shows up even in native floating-point implementations; here is the glibc one for ia64.
And I used it again after encountering this question
No, there is no such function in GMP.
Only in MPFR.
The method below makes use of mpz_get_d_2exp and was obtained from the gmp R package. It can be found under the function biginteger_log in the file bigintegerR.cc (you first have to download the source, i.e. the tar file). You can also see it here: biginteger_log.
// Adapted for general use from the original biginteger_log
// xi = di * 2 ^ ex  ==>  log(xi) = log(di) + ex * log(2)
double biginteger_log_modified(mpz_t x) {
    signed long int ex;
    const double di = mpz_get_d_2exp(&ex, x);
    return log(di) + log(2) * (double) ex;
}
Of course, the above method could be modified to return the log in any base using the properties of logarithms (e.g. the change-of-base formula), as sketched below.
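A hedged sketch (not from the gmp R package) of that change of base, reusing biginteger_log_modified() from above:
// log_b(x) = ln(x) / ln(b)
double biginteger_log2_modified(mpz_t x)  { return biginteger_log_modified(x) / log(2.0); }
double biginteger_log10_modified(mpz_t x) { return biginteger_log_modified(x) / log(10.0); }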
Here it is:
https://github.com/linas/anant
It provides GNU MP real and complex logarithm, exp, sine, cosine, gamma, arctan, sqrt, polylogarithm, Riemann and Hurwitz zeta, confluent hypergeometric, topologist's sine, and more.
As other answers said, there is no logarithm function in GMP. Some of those answers provided implementations of a logarithm function, but with double precision only, not arbitrary precision.
I implemented a full (arbitrary) precision logarithm function below, up to thousands of bits of precision if you wish, using mpf, GMP's generic floating-point type.
My code uses a Taylor series for ln(1 + x), plus mpf_sqrt() to speed up the computation.
The code is in C++ and is quite large for two reasons. First, it does precise time measurements to figure out the best combination of internal computational parameters for your machine. Second, it uses extra speed improvements, such as additional use of mpf_sqrt() to prepare the initial value.
The algorithm of my code is the following:
1. Factor out the exponent of 2 from the input x, i.e. rewrite x = d * 2^exp, using mpf_get_d_2exp().
2. Adjust d (from the step above) so that 2/3 <= d <= 4/3; this is achieved by possibly multiplying d by 2 and doing --exp. This ensures that d always differs from 1 by at most 1/3; in other words, d extends from 1 in both directions (negative and positive) by an equal distance.
3. Divide x by 2^exp, using mpf_div_2exp() and mpf_mul_2exp().
4. Take the square root of x several times (num_sqrt times) so that x becomes closer to 1. This makes the Taylor series converge more rapidly, because computing a few square roots is faster than spending much more time on extra iterations of the Taylor series.
5. Compute the Taylor series for ln(1 + x) up to the desired precision (even thousands of bits of precision if needed).
6. Because in step 4 we took the square root several times, we now need to multiply y (the result of the Taylor series) by 2^num_sqrt.
7. Finally, because in step 1 we factored out 2^exp, we now need to add ln(2) * exp to y. Here ln(2) is computed by just one recursive call to the same function that implements the whole algorithm.
The steps above come from the sequence of formulas ln(x) = ln(d * 2^exp) = ln(d) + exp * ln(2) = ln(sqrt(...sqrt(d))) * 2^num_sqrt + exp * ln(2).
My implementation automatically does timings (just once per program run) to figure out how many square roots are needed to balance out the Taylor series computation. If you need to avoid the timings, then pass the 3rd parameter sqrt_range of mpf_ln() equal to 0.001 instead of zero.
The main() function contains examples of usage, testing of correctness (by comparing to the lower-precision std::log()), timings, and output of different verbose information. The function is tested on the first 1024 bits of the number Pi.
Before calling my function mpf_ln(), don't forget to set up the needed precision of computation by calling mpf_set_default_prec(bits) with the desired precision in bits.
The computational time of my mpf_ln() is about 40-90 microseconds for 1024-bit precision. Bigger precision will take more time, approximately linearly proportional to the number of precision bits.
The very first run of the function takes considerably longer because it pre-computes the timings table and the value of ln(2). So it is suggested to do a first single computation at program start to avoid a longer computation inside a time-critical region later in the code.
To compile, for example on Linux, you have to install the GMP library and issue the command:
clang++-14 -std=c++20 -O3 -lgmp -lgmpxx -o main main.cpp && ./main
#include <cstdint>
#include <iomanip>
#include <iostream>
#include <cmath>
#include <chrono>
#include <mutex>
#include <vector>
#include <unordered_map>
#include <gmpxx.h>
double Time() {
static auto const gtb = std::chrono::high_resolution_clock::now();
return std::chrono::duration_cast<std::chrono::duration<double>>(
std::chrono::high_resolution_clock::now() - gtb).count();
}
mpf_class mpf_ln(mpf_class x, bool verbose = false, double sqrt_range = 0) {
auto total_time = verbose ? Time() : 0.0;
int const prec = mpf_get_prec(x.get_mpf_t());
if (sqrt_range == 0) {
static std::mutex mux;
std::lock_guard<std::mutex> lock(mux);
static std::vector<std::pair<size_t, double>> ranges;
if (ranges.empty())
mpf_ln(3.14, false, 0.01);
while (ranges.empty() || ranges.back().first < prec) {
size_t const bits = ranges.empty() ? 64 : ranges.back().first * 3 / 2;
mpf_class x = 3.14;
mpf_set_prec(x.get_mpf_t(), bits);
double sr = 0.35, sr_best = 1, time_best = 1000;
size_t constexpr ntests = 5;
while (true) {
auto tim = Time();
for (size_t i = 0; i < ntests; ++i)
mpf_ln(x, false, sr);
tim = (Time() - tim) / ntests;
bool updated = false;
if (tim < time_best) {
sr_best = sr;
time_best = tim;
updated = true;
}
sr /= 1.5;
if (sr <= 1e-8) {
ranges.push_back(std::make_pair(bits, sr_best));
break;
}
}
}
sqrt_range = std::lower_bound(ranges.begin(), ranges.end(), size_t(prec),
[](auto const & a, auto const & b){
return a.first < b;
})->second;
}
signed long int exp = 0;
// https://gmplib.org/manual/Converting-Floats
double d = mpf_get_d_2exp(&exp, x.get_mpf_t());
if (d < 2.0 / 3) {
d *= 2;
--exp;
}
mpf_class t;
// https://gmplib.org/manual/Float-Arithmetic
if (exp >= 0)
mpf_div_2exp(x.get_mpf_t(), x.get_mpf_t(), exp);
else
mpf_mul_2exp(x.get_mpf_t(), x.get_mpf_t(), -exp);
auto sqrt_time = verbose ? Time() : 0.0;
// Multiple Sqrt of x
int num_sqrt = 0;
if (x >= 1)
while (x >= 1.0 + sqrt_range) {
// https://gmplib.org/manual/Float-Arithmetic
mpf_sqrt(x.get_mpf_t(), x.get_mpf_t());
++num_sqrt;
}
else
while (x <= 1.0 - sqrt_range) {
mpf_sqrt(x.get_mpf_t(), x.get_mpf_t());
++num_sqrt;
}
if (verbose)
sqrt_time = Time() - sqrt_time;
static mpf_class const eps = [&]{
mpf_class eps = 1;
mpf_div_2exp(eps.get_mpf_t(), eps.get_mpf_t(), prec + 8);
return eps;
}(), meps = -eps;
// Taylor Serie for ln(1 + x)
// https://math.stackexchange.com/a/878376/826258
x -= 1;
mpf_class k = x, y = x, mx = -x;
size_t num_iters = 0;
for (int32_t i = 2;; ++i) {
k *= mx;
y += k / i;
// Check if error is small enough
if (meps <= k && k <= eps) {
num_iters = i;
break;
}
}
auto VerboseInfo = [&]{
if (!verbose)
return;
total_time = Time() - total_time;
std::cout << std::fixed << "Sqrt range " << sqrt_range << ", num sqrts "
<< num_sqrt << ", sqrt time " << sqrt_time << " sec" << std::endl;
std::cout << "Ln number of iterations " << num_iters << ", ln time "
<< total_time << " sec" << std::endl;
};
// Correction due to multiple sqrt of x
y *= 1 << num_sqrt;
if (exp == 0) {
VerboseInfo();
return y;
}
mpf_class ln2;
{
static std::mutex mutex;
std::lock_guard<std::mutex> lock(mutex);
static std::unordered_map<size_t, mpf_class> ln2s;
auto it = ln2s.find(size_t(prec));
if (it == ln2s.end()) {
mpf_class sqrt_sqrt_2 = 2;
mpf_sqrt(sqrt_sqrt_2.get_mpf_t(), sqrt_sqrt_2.get_mpf_t());
mpf_sqrt(sqrt_sqrt_2.get_mpf_t(), sqrt_sqrt_2.get_mpf_t());
it = ln2s.insert(std::make_pair(size_t(prec), mpf_class(mpf_ln(sqrt_sqrt_2, false, sqrt_range) * 4))).first;
}
ln2 = it->second;
}
y += ln2 * exp;
VerboseInfo();
return y;
}
std::string mpf_str(mpf_class const & x) {
mp_exp_t exp;
auto s = x.get_str(exp);
return s.substr(0, exp) + "." + s.substr(exp);
}
int main() {
// https://gmplib.org/manual/Initializing-Floats
mpf_set_default_prec(1024); // bit-precision
// http://www.math.com/tables/constants/pi.htm
mpf_class x(
"3."
"1415926535 8979323846 2643383279 5028841971 6939937510 "
"5820974944 5923078164 0628620899 8628034825 3421170679 "
"8214808651 3282306647 0938446095 5058223172 5359408128 "
"4811174502 8410270193 8521105559 6446229489 5493038196 "
"4428810975 6659334461 2847564823 3786783165 2712019091 "
"4564856692 3460348610 4543266482 1339360726 0249141273 "
"7245870066 0631558817 4881520920 9628292540 9171536436 "
);
std::cout << std::boolalpha << std::fixed << std::setprecision(14);
std::cout << "x:" << std::endl << mpf_str(x) << std::endl;
auto cmath_val = std::log(mpf_get_d(x.get_mpf_t()));
std::cout << "cmath ln(x): " << std::endl << cmath_val << std::endl;
auto volatile tmp = mpf_ln(x); // Pre-Compute to heat-up timings table.
auto time_start = Time();
size_t constexpr ntests = 20;
for (size_t i = 0; i < ntests; ++i) {
auto volatile tmp = mpf_ln(x);
}
std::cout << "mpf ln(x) time " << (Time() - time_start) / ntests << " sec" << std::endl;
auto mpf_val = mpf_ln(x, true);
std::cout << "mpf ln(x):" << std::endl << mpf_str(mpf_val) << std::endl;
std::cout << "equal to cmath: " << (std::abs(mpf_get_d(mpf_val.get_mpf_t()) - cmath_val) <= 1e-14) << std::endl;
return 0;
}
Output:
x:
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273724587007
cmath ln(x):
1.14472988584940
mpf ln(x) time 0.00004426845000 sec
Sqrt range 0.00000004747981, num sqrts 23, sqrt time 0.00001440000000 sec
Ln number of iterations 42, ln time 0.00003873100000 sec
mpf ln(x):
1.144729885849400174143427351353058711647294812915311571513623071472137769884826079783623270275489707702009812228697989159048205527923456587279081078810286825276393914266345902902484773358869937789203119630824756794011916028217227379888126563178049823697313310695003600064405487263880223270096433504959511813198
equal to cmath: true