Closed 6 years ago.
Is there a free, fast way to recover the contents of an SD card? I have an SD card that originally belonged to a camera, with videos and photos on it. Recently, the camera can no longer take pictures, and the SD card shows up corrupted on a computer: the files and folders are replaced with random symbols and are impossible to open.
I've tried multiple computers, and a piece of software that wouldn't let me recover the images without paying. Is there a way to do it?
I recommend using a data ripper: it scans the raw device for the magic bytes that mark the start of known file types and copies whole files out to a separate location; the sketch below illustrates the idea. If the camera corrupted the SD card when it was unplugged and replugged, you can probably recover at least 70% of the data, but don't count on that; recovery is computing black magic. There are many free data rippers (PhotoRec, for example).
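A minimal sketch of the magic-bytes idea in Python, for JPEG files only; card.img is a hypothetical raw dump of the card (made with dd, for example), and real rippers such as PhotoRec handle many more formats and edge cases:

data = open("card.img", "rb").read()  # raw image of the SD card

start = 0
count = 0
while True:
    soi = data.find(b"\xff\xd8\xff", start)  # JPEG start-of-image marker
    if soi == -1:
        break
    eoi = data.find(b"\xff\xd9", soi)        # JPEG end-of-image marker
    if eoi == -1:
        break
    with open("recovered_%d.jpg" % count, "wb") as out:
        out.write(data[soi:eoi + 2])
    count += 1
    start = eoi + 2

This naive version reads the whole dump into memory and trusts the first end marker it finds, which is exactly the kind of corner case a dedicated tool handles better.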
In any case, this question belongs on your camera or SD card vendor's support tracker.
Piriform Recuva is a free option:
https://www.piriform.com/recuva
The following was copied from the Recuva documentation on Piriform's website at the time of this writing (source):
What it can and can't do
Recuva can:
Scan through your hard drives, memory cards, and USB sticks to find files and folders you've deleted.
Tell you in advance how likely it is that your file(s) can be recovered.
Recover files that Windows can't (see Problems with Windows and file deletion)
Securely delete a file you may have previously deleted.
Recover emails you deleted 'permanently' from Microsoft Outlook Express, Mozilla Thunderbird, or Windows Live Mail.
Recover files from your iPod, iPod Nano, or iPod Shuffle (iPod Touch and iPhone not supported at this time). Recuva will even recover songs with Apple's FairPlay DRM.
Recover Canon RAW (.CRW) format image files.
Recover files from NTFS, FAT, and exFAT-formatted drives.
Bring your files back!
Recuva cannot:
Recover all files. Yes, as great as Recuva is it won't work all the time. Sometimes Windows has overwritten the area where the file used to be, or sometimes the file is too corrupted to recover.
Recover files you've deleted securely. For example, if you've used our CCleaner software to delete files using the Secure option, they're gone for good.
Securely delete certain very small files that are held in the Master File Table (MFT) and files of zero-byte length.
Recuva is capable of recovering from NAS devices, however the drive needs to be connected directly to the machine via USB/IDE/SATA. Recuva is not capable of recovering data over a network.
Related
Closed 3 years ago.
From the available information, I understand that setting the disk cache size in Selenium helps pages load faster when scraping or otherwise working against a single website. But what good does setting the disk cache size do when dealing with multiple websites?
Or is setting the disk cache size in fact harmful when scraping multiple websites, for example by giving the sites a way to detect that we are scraping?
A disk cache is cache memory used to speed up storing and accessing data on the host machine's hard disk. It enables faster reads and writes, command issuing, and other I/O between the hard disk, the memory, and the computing components. A disk cache is also referred to as a disk buffer or cache buffer.
Chromium disk cache
The disk cache stores resources fetched from the web so that they can be accessed quickly at a later time if needed. The main characteristics are:
The cache should not grow unbounded so there must be an algorithm for deciding when to remove old entries.
While it is not critical to lose some data from the cache, having to discard the whole cache should be minimized. The design should gracefully handle application crashes, no matter what is going on at the time, discarding only the resources that were open at that moment. However, if the whole computer crashes while we are updating the cache, everything in the cache will probably be discarded.
Access to previously stored data should be reasonably efficient, and it should be possible to use synchronous or asynchronous operations.
We should be able to avoid conflicts that prevent us from storing two given resources simultaneously. In other words, the design should avoid cache trashing.
It should be possible to remove a given entry from the cache, and keep working with a given entry while at the same time making it inaccessible to other requests (as if it was never stored).
The cache should not be using explicit multithread synchronization because it will always be called from the same thread. However, callbacks should avoid reentrancy problems so they must be issued through the thread's message loop.
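To illustrate the first characteristic above (bounded growth via an eviction algorithm), here is a toy least-recently-used (LRU) cache in Python. This is only a sketch of the general idea; Chromium's actual eviction policy is more elaborate:

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used entry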
Conclusion
To conclude, google-chrome ships with a default disk cache size, which users can configure to suit their respective use cases.
Changing Chrome Cache size on Windows 10
Google Chrome's cache size can be set and limited via a command-line flag on its shortcut.
Launch Google Chrome.
Right-click on the icon for Google Chrome on the taskbar and again right-click on the entry labeled as Google Chrome.
Now click on Properties. It will open the Google Chrome Properties window.
Navigate to the tab labeled as Shortcut.
In the field called Target, append the following after the whole address:
--disk-cache-size=<size in bytes>
As an example, to configure a 2 GB cache with --disk-cache-size=2147483648:
"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --disk-cache-size=2147483648
Here 2147483648 is the size of the cache in bytes, which is equal to 2 gigabytes.
Click on Apply and then click on OK for the limit to be set.
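Since the question is about Selenium, the same flag can be passed through ChromeOptions instead of editing a shortcut. A minimal sketch, assuming the Python Selenium bindings for Chrome and an arbitrary 100 MB cache size:

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--disk-cache-size=104857600")  # 100 MB, in bytes
driver = webdriver.Chrome(options=options)
driver.get("https://example.com")
driver.quit()

Note that a disk cache mostly pays off when pages share resources; across many unrelated websites, cached entries are rarely reused, so the benefit shrinks.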
Closed 10 years ago.
The way I understand the notion of a 'process' is that it is a running instance of an executable program. The exe is in secondary memory, and the running instance of it is in RAM. If this understanding is right, I would like to know what is really meant by this abstract description: 'dividing a process into pages, keeping some of the pages in RAM, and leaving the rest in secondary memory to be swapped in when needed'. The question here is in the context of virtual memory.
Adding a 'programming' context to the question, following suggestions from moderators:
Say I write a small program to list the numbers from 1 to 100, or to print 'Hello world', or a desktop utility that scans a text file and prints its words one by one in a window. Once such a program is compiled and linked, how can the resulting executable be 'divided' and run in parts in RAM when I run it? How should I grasp and visualise what 'should be' in RAM at a given point in time and what 'should not'?
You have it (the division) right there, in the virtual-to-physical address translation. The virtual address space is split into blocks of one or several kilobytes (typically all of the same size), each of which can be associated with a chunk (page) of physical memory of the same size.
Those parts of the executable (or process) that haven't been used yet, or haven't been used recently, need not be copied into physical memory from the disk, so the respective portions of the virtual address space may not be associated with physical memory either. When the system runs low on free physical memory, it may repurpose some pages, saving their contents to disk if necessary (or not saving them, if they contain read-only data/code).
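To make the translation concrete, here is a toy Python sketch; the 4 KB page size is typical, and the dictionary stands in for a real page table (which the hardware MMU walks, with the OS servicing page faults by loading pages from disk):

PAGE_SIZE = 4096  # 4 KB pages, a typical size

def translate(virtual_address, page_table):
    page_number = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page_number not in page_table:
        # page fault: the OS would now load this page from disk into RAM
        raise LookupError("page fault on page %d" % page_number)
    frame = page_table[page_number]  # physical frame holding this page
    return frame * PAGE_SIZE + offset

page_table = {2: 7}  # page 2 of the process is resident in physical frame 7
print(translate(2 * PAGE_SIZE + 100, page_table))  # prints 7 * 4096 + 100 = 28772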
Closed 10 years ago.
I make extensive use of source control for anything that relates to a project I'm working on (source, docs etc.) and I've never lost anything that way.
However, I have had two or three crashes (spread over the last 4 years) on my development machine that forced me to reinstall my system and reconfigure my apps (eclipse, vim, Firefox, etc.). For weeks after reinstalling, I was missing one little app or another, some PHP or Python module wasn't there, stuff like that.
While this is not fatal, it's very annoying and sucks up time. Because it seemed so rare, I didn't bother about an actual solution, but meanwhile I've developed a mindset where I just don't want stuff like that happening anymore.
So, what are good backup solutions for a development machine? I've read this very similar question, but that guy really wants something different than me.
What I want is to have spare hard drives on the shelf and reduce my recovery time after a crash to something like an hour or less.
Thinking about this, I figured there might also be a way to use the backup mechanism for keeping two or more dev workstations in sync, so I can continue work at a different PC anytime.
EDIT: I should've mentioned that
I'm running Linux
I want incremental backup, so that it's cheap to do it frequently (once or twice a day)
RAID is good, but I'm on a laptop most of the time, no second hd in there, no E-SATA and I'm not sure about RAIDing to a USB drive: would that actually work?
I've seen sysadmins use rsync, has anybody had any experiences with that?
I would set up the machine how you like it and then image it. Then you can set up rsync (or even SVN) to back up your home dir nightly.
Then, when your computer dies, you can reimage and redeploy your home dir.
The only problem would be upgraded/new software; the only way to deal with that completely would be full nightly backups of your drive(s).
Thanks, this sounds like a good suggestion. I think it should also be possible to update the image regularly (to pick up software updates and installs), though maybe not that often. E.g. I could boot the image in a VM and perform a global package update or something.
Hanno
You could create an image of your workstation after you've installed & configured everything. Then when your computer crashes, you can just restore the image.
A (big) downside to this is that you won't have any updates or changes you've made since you created the image.
Cobian Backup is a reliable backup system for Windows that will perform scheduled backups to an external drive.
You could create a hard drive image. Restoring from a backup image restores everything to the exact state that it was at the time you took the image.
Or you could create an installer that installs just about everything needed.
Since you expressed interest in rsync, here's an article that covers how to make a bootable backup image via rsync for Debian Linux:
http://www.debian-administration.org/articles/575
Rsync is fast and easy for local and network syncing and is by nature incremental.
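To show rsync's incremental side in practice, here is a minimal snapshot-backup sketch in Python (the source and destination paths are hypothetical; the --link-dest option hard-links files that are unchanged since the previous snapshot, so each run only stores what changed):

import datetime
import os
import subprocess

SRC = os.path.expanduser("~") + "/"   # back up the home directory
BACKUP_ROOT = "/mnt/backup"           # e.g. a mounted external drive
snapshot = os.path.join(BACKUP_ROOT, datetime.date.today().isoformat())
latest = os.path.join(BACKUP_ROOT, "latest")  # symlink to the newest snapshot

cmd = ["rsync", "-a", "--delete"]
if os.path.exists(latest):
    cmd.append("--link-dest=" + latest)
subprocess.run(cmd + [SRC, snapshot], check=True)

# repoint the 'latest' symlink at the snapshot we just made
if os.path.islink(latest):
    os.remove(latest)
os.symlink(snapshot, latest)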
You can use RAID-1 for that, but note that it is the synchronization type of solution, not the backup type.
I use RAID mirroring in conjunction with an external hard drive, using Vista's system backup utility to back up the entire machine. That way I can easily survive a hard drive failure, and in the event my system becomes corrupted, I can restore from the E-SATA drive (which I only connect for backups).
Full disclosure: I've never had to restore the backup, so it's kind of like the airbag in your car; hopefully it works when you need it, but there's no way to be sure. Also, the backup process is manual (it can be automated) so I'm only as safe as the last backup.
You can use the Linux dd command-line utility to clone a hard drive.
Just boot from a Linux CD, clone or restore your drive, and reboot.
It works great for Windows/Mac drives too.
This will clone partition 1 of the first hard drive (/dev/sda) to partition 1 of the second drive (/dev/sdb)
dd if=/dev/sda1 of=/dev/sdb1
This will clone partition 1 of the first hard drive to a FILE on the second drive.
dd if=/dev/sda1 of=/media/drive2/backup/2009-02-25.iso
Simply swap the values for if= and of= to restore the drive:
dd if=/media/drive2/backup/2009-02-25.iso of=/dev/sda1
If you boot from the Ubuntu live CD it will automount your USB drives making it easy to perform the backup/restore with external drive(s).
CAUTION: Verify the identity of your drives BEFORE running the above commands. It's easy to overwrite the wrong drive if you're not careful.
I guess this is not exactly what you are looking for, but I just document everything I install and configure on a machine. Google Docs lets me do this from anywhere and keeps the document intact when the machine crashes.
A good step-by-step document usually reduces the recovery time to a day or so.
If you use a Mac, just plug in an external hard drive and Time Machine will do the rest, creating a complete image of your machine on the schedule you set. I restored from a Time Machine image when I swapped out the hard drive in my MacBook Pro and it worked like a charm.
One other option that a couple of guys use at my company is to have their development environment on a large Linux server. They just use their local machines to run an NX client to access the remote desktop (NX is much faster than VNC) - this has the advantages of fast performance, automatic backup of their files on the server, and the fact that they're developing on the same hardware that our customers use.
No matter what solution you use, it is always a good idea to have a secondary backup, too. Secondary backup should be off-site and include your essential work (source code, important docs). In case something happens to your main site (fire at the office, somebody breaks in and steals all your hardware, etc.), you would still be able to recover, eventually.
There are many online backup solutions. You could just get a remote storage at a reliable provider (e.g. Amazon S3) and sync your work on a daily basis. The solution depends on the type of access you can get, but rsync is probably the tool you would use for that.
Closed 11 years ago.
Maybe my question will get lost in the forum, but has anybody worked with RFID tags? I know I can read them, but can I write or modify the data inside? Does anyone know where I can find more about this?
RFID Standards:
125 kHz (low-frequency) tags are write-once/read-many, and usually only contain a small (permanent) unique identification number.
13.56 MHz (high-frequency) tags are usually read/write; they can typically store about 1 to 2 kilobytes of data in addition to their preset (permanent) unique ID number.
860-960 MHz (ultra-high-frequency) tags are typically read/write and can have much larger storage capacity (I think 64 KB is the highest currently available for passive tags) in addition to their preset (permanent) unique ID number.
More Information
Most read/write tags can be locked to prevent further writing to specific data-blocks in the tag's internal memory, while leaving other blocks unlocked. Different tag manufacturers make their tags differently, though.
Depending on your intended application, you might have to program your own microcontroller to interface with an embedded RFID read/write module using a manufacturer-specific protocol. That's certainly a lot cheaper than buying a complete RFID read/write unit, as those can cost several thousand dollars. With a custom solution, you can build your own unit that does exactly what you want for as little as $200.
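If you would rather experiment from a PC than a microcontroller, here is a hedged sketch using the Python nfcpy library; it assumes a USB reader that nfcpy supports and an NDEF-formatted 13.56 MHz tag that is not locked:

import nfc   # the nfcpy library
import ndef  # ndeflib, which nfcpy uses to represent NDEF records

with nfc.ContactlessFrontend("usb") as clf:
    # returning False from on-connect makes connect() hand back the tag object
    tag = clf.connect(rdwr={"on-connect": lambda tag: False})
    if tag.ndef is not None:
        print(tag.ndef.records)  # read the records currently on the tag
        tag.ndef.records = [ndef.TextRecord("hello tag")]  # rewrite them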
Links
RFID Journal
RFID Toys (Book) Website
SkyTek - RFID reader manufacturing company (you can buy their products through third-party retailers & wholesalers like Mouser)
Trossen Robotics - You can buy RFID tags and readers (125 kHz & 13.56 MHz) from here, among other things
I did some development with Mifare Classic (ISO 14443A) cards about 7-8 years ago. You can read and write to all sectors of the card; IIRC, the only data you can't change is the serial number.
Back then we used a proprietary library from Philips Semiconductors. The command interface to the card was quite like ISO 7816-4 (used with standard Smart Cards).
I'd recommend that you look at the OpenPCD platform if you are into development.
This is also of interest regarding the cryptographic functions in some RFID cards.
Some RFID chips are read-write, the majority are read-only. You can find out if your chip is read-only by checking the datasheet.
It depends on the type of chip you are using, but nowadays most chips can be written to. It also depends on how much power you give your RFID device. Reading doesn't need a lot of power or much line of sight; writing needs the tag fully in sight, and for longer.
RFID tags come in several standards. I have developed against RFID tags on Mifare cards (ISO 14443A/B) and ISO 15693. With both of them, you can read, write, or modify the data in the tag's data blocks.
We have recently started looking into RFID solutions at my work place and we found a cheap solution for testing purposes.
One of the units from here:
http://www.sdid.com/products.shtml
It plugs into any Windows Mobile device with an SD slot and allows reading/writing. There is also a development kit to get you on your way with your own apps.
Hope this helps
Closed 4 years ago.
I know the standard answer is No. However, hear out the reasons for wanting it, and then we'll look at whether the same effect as ReadyBoost can be achieved, either by enabling (and installing) ReadyBoost or by using third-party software.
Reasons for using Windows Server 2008 as a development environment on a laptop:
64-Bit, so you get the full use of 4GB RAM.
SharePoint developer, so you can run SharePoint locally and debug successfully.
Hyper-V, so you get hardware virtualisation of test environments and the ability to demo full solutions stored in Hyper-V on the road
So all of that equals: Windows Server 2008 (64) on a laptop.
Now, because we are running Hyper-V, we require a large volume of disk space. This means we are using a 5,400 rpm 250GB HDD.
So we are on a laptop: we cannot use a solid-state HDD, we only have 4GB of RAM, and we have the throughput of a laptop motherboard rather than a server one... all of which means we are not flying. This thing isn't a sluggard, but it's not zippy either.
Windows Server 2008 is based on the same code base as Vista. Vista features ReadyBoost, which enables USB 2.0 flash devices to be used as a supplementary cache for system files, visibly improving Vista's performance. As the codebases are similar, it should be possible for ReadyBoost to work on WS2008; however, Microsoft has not shipped or enabled ReadyBoost in WS2008.
Given that we are running WS2008 on a laptop as a development environment, how can we achieve the performance gains of ReadyBoost through the use of flash devices in Windows Server 2008?
For an answer to be accepted, it must outline an end-to-end process for achieving the performance gain.
Answers of 'No' will not be accepted, as I understand some third-party tools achieve some of the functionality, but I haven't seen a full end-to-end description of how to get going with them.
With virtual machines, the answer to "do you really need so much memory" is a resounding YES. Trying to run 4-6 virtual machines, each configured with 512MB or more, really stresses the system.
The ability to use ANYTHING as additional virtual memory is key.
Is everything that's installed 64-bit?
Do you have hardware virtualization capabilities, and is it turned on in the BIOS?
Have you enabled SuperFetch?
Turn off Desktop Experience.
And last but not least, have a look at this article and see if it gives you any pointers.
To add: it doesn't look like there is a reasonable way of using ReadyBoost on WS2008.
OK, so this isn't quite ReadyBoost, but the end result should be quite similar. Here is a YouTube video you can follow on how to do this on Vista; WS2008 should be no different.
http://www.youtube.com/watch?v=A0bNFvCgQ9w
Also, you may want to upgrade the hard drive on your laptop:
I recommend the ST9500420ASG: 500GB, 7200RPM, 16MB cache, SATA, with G-Shock Sensor.