I have just gotten into programming and I realized that every variable you define uses some of your computer's memory, since it has to be saved somewhere.
I wanted to know: if I run a piece of code multiple times, do I lose more memory each time, or does the system delete it automatically once you close the terminal or the program?
THANK YOU
I've run a piece of code several times, and every time the address that the same variable is saved at is different.
I believe I'm wasting my computer's memory, so if I am, how do I delete those variables from memory?
Yes, for all intents and purposes, it is gone the second the program has finished executing.
There are cases where this isn't true, but they almost certainly don't apply to you. On a normal computer or any other device running an OS (operating system), the OS will clean up any resources used by your code when it is finished running. This includes all the memory used by declared variables (which is usually a tiny amount anyway), files you have opened and forgotten to close, and pretty much everything else. OSs are very resilient!
I've run a piece of code several times, and every time the address that the same variable is saved at is different. I believe I'm wasting my computer's memory, so if I am, how do I delete those variables from memory?
These are some pretty good investigative skills (a good sign for someone new to programming), but there are other reasons for this, so don't worry. Memory addresses are a complex topic worth looking into later down the line, but the simplified story is that they are different every time you run the program, for both security and performance reasons.
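If you want to see this for yourself, here is a minimal C sketch (the variable names are just for illustration). It prints the addresses of a stack variable and a heap allocation; on most modern systems, where address space layout randomization is enabled by default, the printed addresses will usually differ from run to run, even though no memory is being leaked.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int local = 42;                    /* a variable on the stack */
    int *heap = malloc(sizeof *heap);  /* a variable on the heap  */

    /* Run the program twice: with address space layout randomization
       enabled (the default on most modern OSes), these numbers will
       usually be different on each run. */
    printf("address of local: %p\n", (void *)&local);
    printf("address of heap : %p\n", (void *)heap);

    free(heap);
    return 0;  /* the OS reclaims everything the process used at this point */
}
```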
Related
I have a system that runs Windows from a USB stick (it's a proprietary machine). This type of machine is commonly powered off by 'pulling the plug'; there is no way around it, that is how it is operated.
We occasionally have drive corruption on the USB stick, or at least corruption in the directory that we write things into. Is there really any software solution to get around this problem other than 'write as little/infrequently as possible'?
It's a Windows machine, and the applications that write are typically written in Java/C#, if that is useful to anyone. The corruption typically shows up as a write directory, or the parent of a write directory, that can no longer be accessed. The only way to deal with it is to delete it via the command line and start over.
Is there any way to programmatically deal with such a scenario, to perhaps restore a previous state of the memory as opposed to deleting and starting anew?
I don't feel as though there is any way to prevent this type of thing from happening given our current design. If you do enough writes and keep pulling the plug, you are eventually going to get a corruption; that's just a fact, especially in this design. Even if the backup batteries are charged, if the software doesn't shut down gracefully within the battery's discharge time, the corruption could still occur. Not to mention, as gravitymixes said above, it's going to damage hardware eventually, which we have seen before.
A system redesign needs to be considered for this project as a whole. Some type of networked solution comes to mind immediately: data is sent off the volatile machine, over a reliable network connection, to be logged on a machine with a more reliable power source, with writing to disk on the volatile machine itself only as a last-ditch effort if network comms are unreliable at a given point in time (backfill). I feel like this would increase hardware life as well. Of course, network reliability then becomes your problem.
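To make the backfill idea concrete, here is a rough C sketch of the pattern only; the host, port, and file path are made-up placeholders, and it uses POSIX sockets for brevity even though the real machines run Windows (where Winsock, or the Java/C# equivalents, would be used instead). The point is simply: prefer the remote logger, and append locally only when the network send fails.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Placeholder values, not part of the original system. */
#define LOG_HOST      "192.168.1.10"
#define LOG_PORT      5000
#define BACKFILL_PATH "backfill.log"

/* Try to deliver one log line to the remote logger; return 0 on success. */
static int send_remote(const char *line) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(LOG_PORT);
    inet_pton(AF_INET, LOG_HOST, &addr.sin_addr);

    int rc = -1;
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0 &&
        write(fd, line, strlen(line)) == (ssize_t)strlen(line))
        rc = 0;

    close(fd);
    return rc;
}

/* Log a line: prefer the network, fall back to a local append only if it fails. */
void log_line(const char *line) {
    if (send_remote(line) == 0)
        return;

    FILE *f = fopen(BACKFILL_PATH, "a");
    if (f != NULL) {
        fputs(line, f);
        fputc('\n', f);
        fflush(f);  /* hand the data to the OS promptly...             */
        fclose(f);  /* ...because the plug may be pulled at any moment */
    }
}
```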
Context:
I don't really understand how the kernel saves the state of running code when it exceeds its time slice.
I can't really visualize what actually happens.
Question:
1) Where is the currently running code (and its stack?) stored?
2) When the kernel "sees" the code again, will it just follow an offset and keep going as if nothing happened?
It is not clear to me.
Thanks
The current instruction pointer and stack pointer are stored in task_struct->ip and task_struct->sp (on x86), and the new process's task_struct->ip and task_struct->sp are loaded back into the ip and sp registers when switch_to() is called in the Linux kernel.
The kernel's switch_to() does many things while switching to the new process, such as setting up the EIP, stack, FPU state, segment descriptors, and debug registers again.
Then the kernel's switch_mm() switches the virtual memory mappings from the last process to the new process.
It depends on the OS but as a general rule there is a block of storage which holds information about each process (usually called the Process Control Block or PCB). This information includes a pointer to the current line of code that is being executed and the contents of registers etc, so the process can start again where it stopped last time.
This block of information is owned by the OS itself not the process so it lives beyond the suspension of the process.
The program code itself is not stored in the PCB - it simply exists in memory or on disk. It can even be shared between processes, for example several processes may be running the same program, each at a different point in the code at any given time and each with their own set of 'variables' or data unique to that process's run of the program. All the OS needs is the variables and the line number or pointer to know where a particular process was in the code when it was suspended, and it can start from that point again.
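As a mental model only, you can picture the PCB as something like the hypothetical C struct below; the field names are made up for illustration, and a real kernel structure (for example Linux's task_struct) holds far more state.

```c
/* A hypothetical, heavily simplified Process Control Block.
   Real kernels also track open files, credentials, scheduling data, ... */
struct pcb {
    int            pid;             /* process identifier                    */
    void          *instruction_ptr; /* where to resume execution             */
    void          *stack_ptr;       /* top of this process's stack           */
    unsigned long  registers[16];   /* saved general-purpose register values */
    int            state;           /* e.g. RUNNING, READY, BLOCKED          */
    struct pcb    *next;            /* link in the scheduler's ready queue   */
};
```

On a context switch the scheduler saves the outgoing process's registers into its PCB and loads the incoming process's saved values back into the CPU, which is exactly the "follow an offset and keep going as if nothing happened" behaviour asked about.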
It is worth noting that any RAM the process was using may or may not be still there when it restarts. In general an OS will try to leave recently used or frequently used RAM chunks (or 'pages') in memory if possible. If it needs to free up space, however, it may swap the 'page' out to disk, but disk access is much, much slower, hence the desire to avoid swapping out memory which is likely to be used again if possible.
In the worst-case situation, an OS may find it swaps out a process, and then very soon the new process needs to use some memory which has to be retrieved from disk. It is suspended while this happens, as the retrieval takes a long time in CPU terms. It may then happen that the next process also very soon finds itself in the same situation. The OS is now spending a lot of its time swapping processes and memory in and out and much less of its time doing real work - this is commonly called 'thrashing'.
For the purposes of an academic project, I want to create a simple program to hang a computer by consuming all available RAM. I tried creating a string and increasing its length using a while loop like string=string+string2. It takes a very long time when string2 is short, and even though I can eventually use 100% of the RAM, I ultimately get an OutOfMemoryException and most of the memory is freed. Is there any effective way of doing this?
This isn't entirely possible, as the OS deals with memory management, so you won't be able to do what you are trying to do, especially in managed code.
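For what it's worth, in native (unmanaged) code the closest you can get is an allocation loop like the hypothetical C sketch below. Even then the OS steps in: either the allocation eventually fails, or (on Linux, where memory is overcommitted) the OOM killer terminates the process, rather than letting the machine truly hang.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical demonstration only: allocate 100 MB blocks until the OS
   refuses, touching each block so the pages are actually committed. */
int main(void) {
    const size_t chunk = 100 * 1024 * 1024;
    size_t total = 0;

    for (;;) {
        char *block = malloc(chunk);
        if (block == NULL) {
            printf("malloc failed after %zu MB\n", total / (1024 * 1024));
            break;
        }
        memset(block, 1, chunk);  /* force the pages to be committed */
        total += chunk;
        printf("allocated %zu MB\n", total / (1024 * 1024));
    }
    return 0;  /* the OS reclaims everything when the process exits */
}
```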
I also don't believe that an academic programme would have a course, or instruct students, to develop something like this; the question sounds a little weird.
I'm new to programming, taking MIT's 6.00. While watching the Dynamic Programming lecture, a simple question occurred to me: is there any kind of built-in feature (for computers in general) to detect repetitive tasks and compensate?
I realize that's quite vague. I was working on my grandfather's computer because he had been complaining that it was slow. Indeed, it would lag for up to 15 seconds at a time, waiting for programs to open, etc. When I upgraded the RAM, the problem was gone. So if the computer was constantly having to page in and out to disk, why couldn't it have just popped up a little message suggesting a RAM upgrade? That would save quite a bit of time.
Computers are good at performing tasks quickly but slow code can be, well, slow. Can that be automated? Is this even a legitimate question?
In the example you describe the code isn't slow because it's reading/writing to disk. It's slow because it isn't actually doing anything but instead is waiting for the OS to page in and out to disk.
Also, a RAM upgrade isn't always the solution to frequent paging (say, a buggy program leaking memory or something).
It's not really possible in the general sense for the OS to detect what all the possible issues are and suggest a solution. That is in fact a variation of the Halting Problem.
It's impossible in general for a computer to know whether slowness comes from an operation that fundamentally takes a long time to finish, or whether it's taking more time than it really should.
Also, even if you've identified that an operation is slow, it's even more difficult to diagnose the precise reason why it is slow. Sometimes it's because you need more RAM, other times it's because of a slow network, a slow disk, or a slow CPU. This is even harder if the checker is running on the same machine it is checking, since it's also experiencing the slowness itself.
However, there are several things that can be done in certain limited situations. Many popular OSes (e.g. Windows, Linux, Android) can detect a slow response to user input, and will offer to either wait or force close the application (Android), or draw the not-responding window in grayscale (Linux) or with a bluish tint (Windows), if the application fails to respond to user input within a certain period of time.
I was debugging a problem mentioned in a few other* questions on SO and noticed a strange behavior during the debugging process.
The behavior:
Experienced 'out of memory' error while pasting complex formulas. Only about half of the 20,000 rows I'm iterating get formulas pasted before the error.
Commented out virtually all code, error goes away.
Uncommented code incrementally in the hopes of discovering the specific section of code that was causing it.
Ended up uncommenting all the code and stopped experiencing the bug!
This means the exact same code worked fine in the same Excel instance, and fixing it only required running various lighter versions of the code before going back to the original version. What could possibly cause this?
Assuming the data you were running on was exactly identical every time, it sounds more like your problem was with the environment - the problem might be that the operating system ran out of memory. In Excel 2007, usable memory for formulas and pivot caches was increased to 2 gigabytes (GB), so that is probably not the issue. However it is of course also limited by how much memory your operating system had available at that time.
The problem may have occurred because when you first tested it, your available operating system memory was lower (from other processes running... could even have been pushed over the limit by background programs such as Antivirus software running a scan) than when you ran the full macro later. I would try running your macro with the Task Manager open to see if you are getting anywhere close to low on physical memory. Also, (assuming you are on Excel 2007 or later) look at how much memory Excel is using and see if you are getting anywhere close to the 2GB limit. I doubt that this would be the issue, but it's at least worth double checking. Also, like Zairja said, make sure you're setting the calculation to manual at the beginning.
You said that you were using complex formulas... check out this article on Improving Performance in Excel
There is a lot of useful information in the article that will probably help you streamline your macro.
Is this helpful to you?