How does the CPU keep the system time even after shutting down?

When we switch off our PC and then switch it on again, it still shows the correct system time. How does that happen without power?

The CMOS battery will keep the clock running.

There is a battery on your motherboard that maintains the correct time and some settings.
You can search for images of it online.

Related

NVIDIA GPU slows down unexpectedly

We have an application taking data from a camera and processing it in real time through the GPU to render a scene. The GPU is an NVIDIA RTX 3000 in a Lenovo T15 laptop.
Our application starts and all goes well for a first rendering session: FPS holds at 30, GPU power is at 70 W, and CPU usage is around 50%. One application session lasts a few minutes, and results are fully rendered in real time.
Then we initiate a second application session and the FPS plummets and rendering lags, which is unacceptable for our application (in the health space). Power drops to 30 W, and neither the GPU nor the CPU appears to be loaded at all.
We tried moving part of the initialization code to the point where we detect the beginning of a session. That works better and lets us handle a few sessions in a row, but eventually the behavior reappears after a while.
Temperature does not seem to be involved here, so we do not think the GPU is being throttled for excess heat.
Any suggestions on where to look and how to get more deterministic behavior? Is there anything specific we could reset between sessions?
Thanks a lot for any help.

How does a CPU idle (or run below 100%)?

I first learned about how computers work in terms of a primitive single stored program machine.
Now I'm learning about multitasking operating systems, scheduling, context switching, etc. I think I have a fairly good grasp of it all, except for one thing. I have always thought of a CPU as something which is just charging forward non-stop. It always knows where to go next (program counter), and it goes to that instruction, etc, ad infinitum.
Clearly this is not the case, since my desktop computer's CPU is not always running at 100%. So how does the CPU shut itself off or throttle itself down, and what role does the OS play in this? I'm guessing there's an input on the CPU somewhere which allows it to power down, and the OS can set this if it has nothing to schedule, but the next logical question is how it starts back up again. I'm guessing one of two things:
It never shuts down completely, just runs at a very low frequency waiting for the scheduler to get busy again
It shuts down completely but is woken up by interrupts
I searched all over for info on this and came up fairly empty-handed. Any insight would be much appreciated.
The answer is that it depends on the hardware, the operating system, and the way the operating system has been configured.
And it could involve either or both of the strategies you proposed.
Another possibility for machines based on the x86 architecture is that x86 has an HLT instruction, which causes the core to stop until it receives an external interrupt. So the "idle" task could simply execute HLT in a tight loop, as in the sketch below.
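A minimal sketch of such an idle task (assuming kernel context, since HLT is a privileged instruction and faults in user mode):

    /* Minimal sketch of an x86 idle task; must run in ring 0 because
       hlt is a privileged instruction. */
    static void idle_task(void)
    {
        for (;;) {
            /* sti re-enables interrupts, hlt stops the core until one
               arrives; pairing them avoids a lost-wakeup race. */
            __asm__ volatile ("sti; hlt");
        }
    }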
Just go to Task Manager's Performance tab and watch the CPU usage while you're doing absolutely nothing on your computer: it never stops fluctuating. With an operating system like Windows running, the CPU is ALWAYS going to be doing something; it never completely shuts down.
Even having your monitor display an image requires your CPU to do some work, and so on.
Everything runs through the CPU; like your brain, it controls everything, and nothing would function without it.
Some CPUs do have a 'wait for interrupt' instruction which allows the CPU to stop executing instructions when there is nothing to do, and it will not re-awaken until there is an interrupt event. This is particularly useful in microcontrollers, where they can sit for long periods of time waiting for something to happen.
Intel = HLT (Halt)
ARM = WFI (Wait for interrupt)
Sometimes a 'busy wait' is also used, where the CPU sits in a little 'idle' loop, checking for things to do (sketched below). In this case the CPU is still running instructions, but the operating system is in an idle state. It's not as efficient as using HLT.
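For contrast, a hypothetical busy-wait idle loop (work_pending() is an assumed scheduler hook, not a real API):

    extern int work_pending(void);  /* assumed hook into the scheduler */

    static void busy_idle(void)
    {
        while (!work_pending()) {
            /* spin: still executing instructions, so power is wasted,
               but the response is immediate once work shows up */
        }
    }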
Modern CPUs can also adjust their power usage, and are capable of reducing clock rates, or shutting down parts of the CPU that aren't being used. In this way, power usage during an active idle state can be less than during active processing, even though the core CPU is still running and executing instructions.
Speaking about the x86 architecture: when an operating system has nothing to do, it can use the HLT instruction.
The HLT instruction stops the CPU until the next interrupt.
See http://en.m.wikipedia.org/wiki/HLT for details.
Other architectures have a similar instruction to give the CPU a rest.

Mac app uses a lot of CPU after sleep

My app runs well at launch. Normally it uses only 0.2% CPU while running.
But after using the app day after day, it now uses 15% CPU, which is really high for me.
I think things start going wrong after I put my MacBook to sleep many times; I never turn the MacBook off.
I don't know where to start investigating this bug.
PS: my app uses many NSTimers, which are added to NSRunLoopCommonModes.
Thanks,
The only real answer is: Profile and see where the time is being used.
In sleep mode, the operating system and other programs generally do little or nothing. If your app keeps looping and ignores sleep mode, its CPU usage percentage will go up because the other programs are using less.
Ideally your app should detect sleep mode and then adjust its behaviour, e.g. suspend the loop.
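For example, on macOS a process can register for sleep/wake notifications through the IOKit C API; a hedged sketch, with the actual timer suspend/resume logic left as comments:

    #include <stdio.h>
    #include <IOKit/pwr_mgt/IOPMLib.h>
    #include <IOKit/IOMessage.h>
    #include <CoreFoundation/CoreFoundation.h>

    static io_connect_t root_port;

    static void power_cb(void *refcon, io_service_t service,
                         natural_t msg, void *arg)
    {
        switch (msg) {
        case kIOMessageSystemWillSleep:
            puts("going to sleep: suspend timers here");
            IOAllowPowerChange(root_port, (long)arg);  /* must acknowledge */
            break;
        case kIOMessageSystemHasPoweredOn:
            puts("woke up: resume timers here");
            break;
        }
    }

    int main(void)
    {
        IONotificationPortRef port;
        io_object_t notifier;

        root_port = IORegisterForSystemPower(NULL, &port, power_cb, &notifier);
        if (!root_port)
            return 1;

        CFRunLoopAddSource(CFRunLoopGetCurrent(),
                           IONotificationPortGetRunLoopSource(port),
                           kCFRunLoopCommonModes);
        CFRunLoopRun();  /* block; callbacks fire on sleep/wake */
        return 0;
    }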

How to receive TCP data while the iPod is in hibernation mode?

I want to constantly send data to the iPod. But how can I do this when the iPod is in hibernation mode? If I disable hibernation mode, the battery will be used up in a matter of hours.
In short, you can't. The whole point of hibernation mode is to limit battery consumption in every way possible, and that includes turning off the WiFi radio.
What you can do is minimize power consumption; turn off the screen, use minimal CPU, etc...
However, I'm not versed in the ways of the APIs to know specifically what is possible.

How does the function time() tell the current time, even when the computer was powered off earlier?

How can we work with timers at millisecond (0.001 s) resolution? How can we divide the second as finely as we want, and how do we deal with the second itself?
http://computer.howstuffworks.com/question319.htm
In your computer (as well as other gadgets), the battery powers a chip called the Real Time Clock (RTC) chip. The RTC is essentially a quartz watch that runs all the time, whether or not the computer has power. The battery powers this clock. When the computer boots up, part of the process is to query the RTC to get the correct time and date. A little quartz clock like this might run for five to seven years off of a small battery. Then it is time to replace the battery.
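On Linux you can query that battery-backed clock yourself; a hedged sketch (the device name /dev/rtc0 and read permission on it are system-dependent):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/rtc.h>

    int main(void)
    {
        int fd = open("/dev/rtc0", O_RDONLY);
        if (fd < 0) { perror("open /dev/rtc0"); return 1; }

        struct rtc_time rt;
        if (ioctl(fd, RTC_RD_TIME, &rt) < 0) { perror("RTC_RD_TIME"); return 1; }

        /* struct rtc_time mirrors struct tm: years since 1900, months 0-11 */
        printf("RTC says: %04d-%02d-%02d %02d:%02d:%02d\n",
               rt.tm_year + 1900, rt.tm_mon + 1, rt.tm_mday,
               rt.tm_hour, rt.tm_min, rt.tm_sec);

        close(fd);
        return 0;
    }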
Your PC will have a hardware clock, powered by a battery so that it keeps ticking even while the computer is switched off. The PC knows how fast its clock runs, so it can determine when a second goes by.
Initially, the PC doesn't know what time it is (i.e. it just starts counting from zero), so it must be told what the current time is - this can be set in the BIOS settings and is stored in the CMOS, or can be obtained via the Internet (e.g. by synchronizing with the clocks at NIST).
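Once the clock has been seeded, the time() function just reports the operating system's running count of seconds; a minimal example:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t now = time(NULL);  /* seconds since 1970-01-01 00:00:00 UTC */
        printf("epoch seconds: %lld\n", (long long)now);
        printf("local time:    %s", ctime(&now));  /* human-readable form */
        return 0;
    }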
Some recap, and some more info:
1) The computer reads the Real-Time Clock during boot-up, and uses that to set its internal clock.
2) From then on, the computer uses its CPU clock only; it does not normally re-read the RTC.
3) The computer's internal clock is subject to drift, due to thermal instability, power fluctuations, inaccuracies in finding an exact divisor for seconds, interrupt latency, cosmic rays, and the phase of the moon.
4) The magnitude of the clock drift can be on the order of seconds per day (tens or hundreds of seconds per month).
5) Most computers are capable of connecting to a time server (over the internet) to periodically reset their clock.
6) Using a time server can bring the accuracy to within tens of milliseconds (normally). My computer updates every 15 minutes.
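For illustration, a bare-bones SNTP query (RFC 4330) showing what "asking a time server" boils down to; pool.ntp.org is only an example host, and error handling is deliberately thin:

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int main(void)
    {
        uint8_t pkt[48] = {0};
        pkt[0] = 0x1B;  /* LI = 0, version = 3, mode = 3 (client) */

        struct addrinfo hints = {0}, *srv;
        hints.ai_family = AF_INET;
        hints.ai_socktype = SOCK_DGRAM;
        if (getaddrinfo("pool.ntp.org", "123", &hints, &srv) != 0)
            return 1;

        int fd = socket(srv->ai_family, srv->ai_socktype, srv->ai_protocol);
        sendto(fd, pkt, sizeof pkt, 0, srv->ai_addr, srv->ai_addrlen);
        recv(fd, pkt, sizeof pkt, 0);

        /* Transmit timestamp: big-endian seconds since 1900, at offset 40. */
        uint32_t secs1900 = ((uint32_t)pkt[40] << 24) | ((uint32_t)pkt[41] << 16) |
                            ((uint32_t)pkt[42] << 8)  |  (uint32_t)pkt[43];
        time_t unix_secs = (time_t)(secs1900 - 2208988800u);  /* 1900 -> 1970 epoch */
        printf("server time: %s", ctime(&unix_secs));

        close(fd);
        freeaddrinfo(srv);
        return 0;
    }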
Computers know the time because, like you, they have a digital watch they look at from time to time.
When you get a new computer or move to a new country you can set that watch, or your computer can ask the internet what the time is, which helps to stop it from running slow, or fast.
As a user of the computer, you can ask the current time, or you can ask the computer to act as an alarm clock. Some computers can even turn themselves on at a particular time, to back themselves up, or wake you up with a favourite tune.
Internally, the computer is able to tell the time in milliseconds, microseconds or sometimes even nanoseconds. However, this is not entirely accurate, and two computers next to each other would have different ideas about the time in nanoseconds. But it can still be useful.
The computer can set an alarm for a few milliseconds in the future, and commonly does this so it knows when to stop thinking about your e-mail program and spend some time thinking about your web browser. Then it sets another alarm so it knows to go back to your e-mail a few milliseconds later.
As a programmer you can use this facility too; for example, you could set a time limit on a level in a game using a 'timer'. Or you could use a timer to tell when you should put the next frame of the animation on the display, perhaps 25 times a second (i.e. every 40 milliseconds).
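A hedged sketch of such frame timing, using the POSIX clock_gettime() call (the 40 ms budget matches the 25-frames-per-second example above):

    #include <stdio.h>
    #include <time.h>

    /* Milliseconds from a monotonic clock, immune to wall-clock resets. */
    static long long now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
    }

    int main(void)
    {
        long long start = now_ms();
        /* ... render one frame here ... */
        long long elapsed = now_ms() - start;
        printf("frame took %lld ms of the 40 ms budget\n", elapsed);
        return 0;
    }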
To answer the main question: the BIOS clock is backed by a battery on your motherboard, as Jian's answer says. That keeps time when the machine is off.
To answer what I think your second question is, you can get the second from the millisecond value by doing an integer division by 1000, like so:
second = (int)(milliseconds / 1000);  /* integer division discards the leftover milliseconds */
If you're asking how we're able to get the time with that accuracy, look at Esteban's answer: the quartz crystal vibrates with a fixed period, say 0.00001 seconds. We just make a circuit that counts the vibrations; when we have reached 100000 vibrations, we declare that a second has passed and update the clock.
We can get any accuracy by counting the vibrations this way, as long as the desired resolution is no finer than the period of vibration of the crystal we're using.
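As a toy model of that counting circuit (the 32,768 Hz rate is an assumption; it is the crystal frequency commonly used for real-time clocks):

    #include <stdint.h>

    #define TICKS_PER_SECOND 32768u  /* assumed crystal frequency */

    static uint32_t ticks;
    static uint32_t seconds;

    /* Imagine this is called once per crystal oscillation,
       e.g. from a hardware timer interrupt. */
    void on_tick(void)
    {
        if (++ticks >= TICKS_PER_SECOND) {
            ticks = 0;
            ++seconds;  /* one full second's worth of vibrations counted */
        }
    }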
The motherboard has a clock that ticks, and every tick represents a unit of time.
To be more precise, the clock is usually driven by a quartz crystal that oscillates at a given frequency; some historically common CPU clock frequencies are 33.33 and 40 MHz.
Absolute time is archaically measured as a 32-bit count of seconds since 1970. This leads to the "2038 problem", when that counter simply overflows. Hence the 64-bit time APIs used on modern Windows and Unix platforms (including the BSD-based macOS).
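A small demonstration of the wrap-around (converting the out-of-range value back to a signed 32-bit integer is implementation-defined, but on typical two's-complement platforms it wraps as shown):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int32_t t32 = INT32_MAX;  /* 2038-01-19 03:14:07 UTC in epoch seconds */
        int32_t wrapped = (int32_t)((uint32_t)t32 + 1u);  /* typically wraps negative */
        int64_t t64 = (int64_t)t32 + 1;  /* a 64-bit time_t just keeps counting */

        printf("32-bit: %ld -> %ld (back to 1901)\n", (long)t32, (long)wrapped);
        printf("64-bit: %lld (no overflow)\n", (long long)t64);
        return 0;
    }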
Quite often a PC user is interested in time intervals rather than the absolute time since some epochal event. A common computer implementation has things called timers that allow just that. These timers might even run while the PC is otherwise off or asleep, for purposes such as polling hardware for wake-up status, switching sleep modes, or coming out of sleep. Intel's processor documentation goes into incredible detail about these.