For example, this PDF (https://datasheets.raspberrypi.com/rpi4/raspberry-pi-4-datasheet.pdf) has many bookmarks.
I'd like to go to a specific page with a URL.
#page=10 can be used, but it is not accurate.
Is it possible to jump to a bookmarked page with a URL?
Thanks a lot
There is no guarantee that any viewer will support bookmarks, nor where it will open the page. A named destination often resolves to little more than a page number, so try the link below, but expect page 6 or 7 depending on how the viewer interprets page numbering.
https://datasheets.raspberrypi.com/rpi4/raspberry-pi-4-datasheet.pdf#nameddest=interfaces
However, some viewers may on occasion do better with other methods, such as a comment number or two parameters combined, for example a page plus a scroll position.
The PDF you nominated above does not seem to respond to named destinations as well as other PDFs can, but it does at least open that Interfaces page. So, trying a more complex one:
https://datasheets.raspberrypi.com/rpi4/raspberry-pi-4-datasheet.pdf#nameddest=Electrical%20Specification
That one seems not to work in my Edge (it just opens page 1), which is why the advice is to keep PDF bookmark names to a single unique word, never write them as War and (another) Piece.
Page numbers are the most likely to work well, but not like a clickable ToC in the frontmatter, unless you can fully control the viewer's input and response.
Also beware of the viewer's own PDF view settings, such as "Open PDFs to last viewed location when you reopen files". Since that was on by default during my testing, my results were often skewed.
Edge and Safari are usually the worst at applying URL bookmarks, if they do at all.
However, an offline viewer such as the one I support will usually allow driving from bookmark to bookmark via a program (here I used the Windows 11 command line as proof of ability, but it can also be done via DDE etc.).
You enquired whether the internal links can be used as named destinations like the ToC. Unfortunately, the full list of destinations is as set in this list:
Introduction; goto:{6; XYZ; 12; 72.000; 733.890}
Features; goto:{7; XYZ; 12; 72.000; 733.890}
Hardware; goto:{7; XYZ; 12; 72.000; 711.439}
Interfaces; goto:{7; XYZ; 12; 72.000; 543.295}
Software; goto:{8; XYZ; 12; 72.000; 733.890}
Mechanical Specification; goto:{8; XYZ; 12; 72.000; 553.757}
... continues below
3 extras ADDED BY ME MANUALLY
Figure 1: Mechanical Dimensions; goto:{8; XYZ; 12; 0.000; 481.852}
Table 2: Absolute Maximum Ratings; goto:{9; XYZ; 12; 0.000; 789.684}
Table 3: DC Characteristics; goto:{9; XYZ; 12; 0.000; 465.650}
... continued from above
Electrical Specification; goto:{8; XYZ; 12; 72.000; 196.355}
Power Requirements; goto:{10; XYZ; 12; 72.000; 733.890}
Peripherals; goto:{10; XYZ; 12; 72.000; 662.151}
GPIO Interface; goto:{10; XYZ; 12; 72.000; 625.595}
GPIO Pin Assignments; goto:{10; XYZ; 12; 72.000; 548.239}
GPIO Alternate Functions; goto:{11; XYZ; 12; 72.000; 733.890}
Display Parallel Interface (DPI); goto:{12; XYZ; 12; 72.000; 733.890}
SD/SDIO Interface; goto:{12; XYZ; 12; 72.000; 663.378}
Camera and Display Interfaces; goto:{12; XYZ; 12; 72.000; 585.863}
USB; goto:{12; XYZ; 12; 72.000; 494.400}
HDMI; goto:{12; XYZ; 12; 72.000; 416.487}
Audio and Composite (TV Out); goto:{12; XYZ; 12; 72.000; 338.574}
Temperature Range and Thermals; goto:{12; XYZ; 12; 72.000; 240.336}
Availability; goto:{13; XYZ; 12; 72.000; 733.890}
Support; goto:{13; XYZ; 12; 72.000; 671.773}
Converting the ToC into destinations is an editor function, rarely found. You may also see from my manual additions that I did not set the target area well, so those settings need manual correction. Adding bookmarks for tables or figures is chiefly a GUI task.
I have listed all the available bookmarks and shown the 3 extras I attempted for ToC entries whilst exporting; however, they are not URL-accessible until saved as outline entries. The most reliable method, using an external go-to list, is the open parameters #page=X, #zoom=##% and #fit=something (see https://pdfobject.com/pdf/pdf_open_parameters_acro8.pdf), but they are not supported/available in all viewers. That is why I support SumatraPDF, which takes the command line -page # -zoom ### -scroll <x,y>.
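As a sketch of both routes (the page and zoom values below are purely illustrative, not tied to a particular heading in that datasheet), the open parameters are combined with & in the URL fragment, and SumatraPDF takes the equivalent switches on its command line:
https://datasheets.raspberrypi.com/rpi4/raspberry-pi-4-datasheet.pdf#page=10&zoom=150
SumatraPDF.exe -page 10 -zoom 150 raspberry-pi-4-datasheet.pdf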
Related
I'm looking to create a ring oscillator in Verilog, using inverters and generate.
Here's what I've tried so far:
module ringOsc(outclk);
  parameter SIZE = 8; // This needs to be an even number
  output outclk;
  wire [SIZE : 0] w;
  genvar i;
  generate
    for (i = 0; i < SIZE; i = i + 1) begin : notGates
      not notGate(w[i+1], w[i]);
    end
    not notGateFirst(w[0], w[SIZE]);
  endgenerate
  assign outclk = w[0];
endmodule
This will be loaded onto an FPGA and the frequency of oscillation will be measured (of course with more than 9 inverters). Is this correct or am I missing something? Any help would be appreciated.
For a ring oscillator you need to have a delay. The not gates you are using have no such delay in simulation as they are ideal models.
Simplest is to add a delay to the gates:
not #(5,5) notGate(w[i+1], w[i]);
not #(5,5) notGateFirst(w[0], w[SIZE]);
Also, it is good practice to have an enable: make one of the gates a NAND gate so the ring can be held static when disabled.
You also need to tell the tool not to optimise your ring oscillator away. For that you have to look at the synthesis tool of your FPGA, especially the constraints settings to prevent logic optimization. Defining the intermediate nets as 'keep' might work.
Has anybody ever calculated the MIPS of an LPC1788 board? Recently I calculated a result with the following code running in ROM:
volatile uint32_t tick;

void SysTick_Handler()
{
    tick++;
}

unsigned long loops_per_ms;

extern void __delay(int n);

int calculate_mips()
{
    int prec = 8;
    unsigned long ji;
    unsigned long loop;

    /* Coarse search: double the loop count until __delay() spans one tick. */
    loops_per_ms = 1 << 12;
    while (loops_per_ms) {
        ji = tick;
        while (ji == tick) ;        /* wait for the start of a new tick */
        ji = tick;
        __delay(loops_per_ms);
        if (ji != tick)
            break;
        loops_per_ms <<= 1;
    }

    /* Fine search: refine the remaining bits one at a time. */
    loops_per_ms >>= 1;
    loop = loops_per_ms >> 1;
    while (prec--) {
        loops_per_ms |= loop;
        ji = tick;
        while (ji == tick) ;
        ji = tick;
        __delay(loops_per_ms);
        if (ji != tick)
            loops_per_ms &= ~loop;
        loop >>= 1;
    }

    /* 2 instructions per __delay loop and a 1 ms tick: MIPS = loops_per_ms / 500 */
    return loops_per_ms / 500;
}
delay.s:
        PUBLIC  __delay
        SECTION .text:CODE:REORDER(2)
        THUMB
__delay
        subs    r0, r0, #1
        bhi     __delay
        mov     pc, lr
        END
With the IAR IDE I got loops_per_ms = 39936, which gives about 79 MIPS, while with Keil I got loops_per_ms = 29952, which means about 59 MIPS.
The MCU clock is set to 120 MHz; by the datasheet the figure should be 1.25 × 120 = 150 MIPS. I think running the code from ROM slows it down.
Does anybody have comments or other results?
You cannot measure MIPS in that way. You have no control over how many instructions the compiler will use to implement a particular piece of high-level source code, and it will vary with optimisation level.
The core can achieve 1.25 MIPS per MHz, but that may be reduced by a number of factors. For example, on Cortex-M the on-chip flash and on-chip RAM use separate buses, so optimal performance is achieved when data is in RAM and code is in flash. If an instruction in flash needs to fetch data from flash, throughput is reduced because the instruction fetch and the data fetch must be sequential, whereas a data fetch from RAM can occur in parallel. If you ran the code from RAM you would really notice a slowdown, since all data and instruction fetches would be sequential. Most Cortex-M parts employ a flash accelerator of some sort to compensate for slower flash memory and achieve zero-wait code execution in most cases, though it is possible to write code perversely enough to defeat that benefit. Other causes of reduced MIPS are bus latency caused by DMA operations and peripheral wait states.
The simplest and most accurate method of measuring MIPS for your particular application (which for the reasons mentioned above may vary from the optimal) is to use a trace capable debugger, which will capture every instruction executed over a period.
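If a trace probe is not to hand, a rough on-target alternative (counting cycles rather than retired instructions, so it is not a true MIPS figure) is the DWT cycle counter available on most Cortex-M3 devices such as the LPC1788. A minimal sketch, assuming the CMSIS headers; the exact device header name depends on your toolchain:
#include "LPC177x_8x.h"              /* CMSIS device header; name may vary */

extern void __delay(int n);

/* Enable the DWT cycle counter (present on most Cortex-M3 parts). */
static void cyccnt_init(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;   /* enable the trace block */
    DWT->CYCCNT = 0;                                  /* reset the counter      */
    DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;              /* start counting cycles  */
}

/* Measure how many core cycles a __delay(n) call really takes. */
static uint32_t cycles_for_delay(int n)
{
    uint32_t start = DWT->CYCCNT;
    __delay(n);
    return DWT->CYCCNT - start;       /* unsigned subtraction handles wrap-around */
}
Comparing the cycle count against the 120 MHz core clock gives an effective throughput figure for that particular loop, flash wait states and all.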
I'd like to use GPIO to turn on an LED on a CraneBoard (ARM processor). I'm very new to embedded programming, but I'm quite good at C. I referred to some websites and learnt about the GPIO-related registers. I wrote some code, but I'm not quite sure how to integrate it into the U-Boot code for the CraneBoard. I don't know where to start. Kindly guide me.
#define LED1 (1 << 6)

int getPinState(int pinNumber);

int main(void)
{
    GPIO0_IODIR |= LED1;
    GPIO0_IOSET |= LED1;
    while (1)
    {
        GPIO0_IOCLR |= LED1;
    }
}

int getPinState(int pinNumber)
{
    int pinBlockState = GPIO0_IOPIN;
    int pinState = (pinBlockState & (1 << pinNumber)) ? 1 : 0;
    return pinState;
}
First of all, learn common bit (also pin in your case) manipulation expressions that you will use A LOT in embedded programming:
/* Set bit to 1 */
GPIO0_IODIR |= LED1; //output
/* Clear bit (set to 0) */
GPIO0_IOSET &= ~LED1; //low
/* Toggle bit */
GPIO0_IOSET ^= LED1;
Your while() loop actually does nothing after the first iteration, because repeating the same OR operation does not change the bit state (see the truth table for that operator). You should also add a delay, because if the pin toggles too fast the LED may look as if it is off all the time. A simple solution would look like:
while (1)
{
    GPIO0_IOSET ^= LED1;
    sleep(1); // or replace with any other available delay command
}
I do not have the U-Boot source files for the CraneBoard, so I cannot tell you the exact place to put your code, but basically there are several options: 1) add it in main(), where U-Boot starts, thus hanging it (but you still have a blinking LED!); 2) implement a separate command to switch the LED on/off (see command.c and the cmd_-prefixed files for examples); 3) integrate it into the serial loop, so the pin can be switched while waiting for user input; 4) build it as an application on top of U-Boot.
Get used to a lot of reading and documentation; the TRM is your friend here (sometimes the only one). There are also some great guides for embedded starters, just google around. A few to mention:
http://www.microbuilder.eu/Tutorials/LPC2148/GPIO.aspx (basics with examples)
http://beagleboard.org/ (great resource for BeagleBoard, but much applies to CraneBoard as they share the same SoC, includes great community).
http://free-electrons.com/ (more towards embedded Linux and other advanced topics, but some basics can also be found)
http://processors.wiki.ti.com/index.php/CraneBoard (official CraneBoard wiki, probably know this, but just in case)
P.S. Good luck and don't give up!
If you want to do it in U-Boot (and not in Linux), then you have to write an application for U-Boot.
Section 5.12 of the U-Boot manual explains how to do it.
The source of u-boot provides some examples that you can use.
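As a rough sketch of what such a standalone application looks like (patterned on examples/hello_world.c in the U-Boot source; led_demo is a hypothetical name and the board-specific GPIO accesses are omitted):
#include <common.h>
#include <exports.h>

/* Built like the examples/ programs, loaded into RAM and started with 'go'. */
int led_demo(int argc, char * const argv[])
{
    app_startup(argv);                /* set up the jump table into U-Boot */
    printf("LED demo running\n");
    /* board-specific GPIO register writes would go here */
    return 0;
}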
I'd like to add my own answer. The code I wrote previously was a generic one; on the CraneBoard there are specific functions that perform these operations, so I rewrote it accordingly. I added the file cmd_toggle.c to the 'common' directory of the U-Boot tree and added it to the Makefile. The following code makes the LED blink.
int glow_led(cmd_tbl_t *cmdtp, int flag, int argc, char *argv[])
{
    int ret, i = 0, num = 0;
    int lpin;

    lpin = (int)simple_strtoul(argv[1], NULL, 10);  /* U-Boot's string-to-number helper */
    ret = set_mmc_mux();
    if (ret < 0)
        printf("\n\nLED failed to glow!\n\n");
    else {
        if (!omap_request_gpio(lpin))
        {
            omap_set_gpio_direction(lpin, 0);       /* configure the pin as an output */
            for (i = 1; i < 21; i++)
            {
                if ((i % 2) == 0)
                {
                    num = num - 1;
                    omap_set_gpio_dataout(lpin, num);
                }
                else
                {
                    num = num + 1;
                    omap_set_gpio_dataout(lpin, num);
                }
                udelay(3000000);
            }
        }
    }
    return 0;
}
U_BOOT_CMD(toggle,2,1,glow_led,"Glow an LED","pin_number");
I could've made this a little simpler by just using a while loop to repeatedly set it as 1 and 0.
This can be executed from the U-Boot console as toggle 142, as I have connected the LED to pin 142.
P.S. Thanks for all your guidance. A special thanks to KBart.
Does anyone know a good reference to help with understanding the relative cost of operations like copying variables, declaring new variables, FileIO, array operations, etc? I've been told to study decompilation and machine code but a quick reference would be nice. For example, something to tell me how much worse
for (int i = 0; i < 100; i++) {
    double d = 7.65;
    calc(d);
}
is than
double d = 7.65;
for (int i = 0; i < 100; i++) {
    calc(d);
}
Here is a nice paper by Felix von Leitner on the state of C compiler optimization. I learned of it on this Lambda the Ultimate page.
The performance of the operations you mention, such as file I/O, memory access, and computation, is highly dependent on a computer's architecture. Much of the optimization of software for today's desktop computers is focused on cache memory.
You would gain much from an architecture book or course. Here's a good example from CMU.
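As a small, self-contained illustration of the cache point (the matrix size is arbitrary), traversing the same array row by row or column by column does exactly the same arithmetic, yet the column-order version is typically several times slower purely because of cache misses:
#include <stddef.h>

#define N 1024
static double a[N][N];

/* Row-major order: consecutive accesses stay within the same cache lines. */
double sum_rows(void)
{
    double s = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

/* Column-major order: each access jumps a whole row ahead, missing the cache. */
double sum_cols(void)
{
    double s = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += a[i][j];
    return s;
}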
Martinus gives a good example of code where the compiler optimizes the code at run-time by calculating out the multiplication:
Martinus' code:
int x = 0;
for (int i = 0; i < 100 * 1000 * 1000 * 1000; ++i) {
    x += x + x + x + x + x;
}
System.out.println(x);
His code after constant folding, a compiler optimization performed at compile time (thanks to Abelenky for pointing that out):
int x = 0;
for (int i = 0; i < 100000000000; ++i) {
    x += x + x + x + x + x;
}
System.out.println(x);
This optimization technique seems trivial, in my opinion.
I guess it may be one of the techniques Sun has recently started to regard as trivial.
I am interested in two types of optimizations made by compilers:
optimizations which are omitted by today's compilers as trivial, such as in Java's compiler at run-time
optimizations which are used by the majority of today's compilers
Please put each optimization technique in a separate answer.
Which techniques did compilers use in the 90s (1), and which do they use today (2)?
Just buy the latest edition of the Dragon Book.
How about loop unrolling?
for (i = 0; i < 100; i++)
    g();
To:
for (i = 0; i < 100; i += 2)
{
    g();
    g();
}
From http://www.compileroptimizations.com/. They have many more - too many for an answer per technique.
Check out Trace Trees for a cool interpreter/just-in-time optimization.
The optimization shown in your example, of collapsing 100*1000*1000*1000 => 100000000000 is NOT a run-time optimization. It happens at compile-time. (and I wouldn't even call it an optimization)
I don't know of any optimizations that happen at run-time, unless you count VM engines that have JIT (just-in-time) compiling.
Optimizations that happen at compile-time are wide ranging, and frequently not simple to explain. But they can include in-lining small functions, re-arranging instructions for cache-locality, re-arranging instructions for better pipelining or hyperthreading, and many, many other techniques.
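For instance (a minimal sketch with made-up function names), a typical optimizing compiler will inline the small helper and fold the arithmetic, so that f() compiles down to returning a constant:
/* Before optimization: two calls and two multiplications. */
static int square(int x)
{
    return x * x;
}

int f(void)
{
    return square(10) + square(20);   /* usually reduced to `return 500;` at -O2 */
}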
EDIT: Some F*ER edited my post... and then down-voted it. My original post clearly indicated that collapsing multiplication happens at COMPILE TIME, not RUN TIME, as the poster suggested. Then I mentioned I don't really consider collapsing constants to be much of an optimization. The pre-processor even does it.
Masi: if you want to answer the question, then answer the question. Do NOT edit other people's answers to put in words they never wrote.
Compiler books should provide a pretty good resource.
If this is obvious, please ignore it, but you're asking about low-level optimizations, the only ones that compilers can do. In non-toy programs, high-level optimizations are far more productive, but only the programmer can do them.
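To make that concrete (a sketch with invented function names): no compiler will turn the quadratic duplicate check below into the sort-based version, because that is an algorithmic change only the programmer can decide to make.
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Low-level view: O(n^2), compares every pair. */
bool has_duplicate_slow(const int *v, size_t n)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++)
            if (v[i] == v[j])
                return true;
    return false;
}

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* High-level optimization: O(n log n), sort a copy and scan neighbours. */
bool has_duplicate_fast(const int *v, size_t n)
{
    int *tmp = malloc(n * sizeof *tmp);
    if (tmp == NULL)
        return has_duplicate_slow(v, n);   /* fall back if allocation fails */
    memcpy(tmp, v, n * sizeof *tmp);
    qsort(tmp, n, sizeof *tmp, cmp_int);

    bool dup = false;
    for (size_t i = 1; i < n && !dup; i++)
        dup = (tmp[i] == tmp[i - 1]);
    free(tmp);
    return dup;
}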