Is there a way to copy a list of select files from one drive, and sort them into an identical file structure on another? - batch-processing

I am a video editor, and I have access to one of my repeat clients' servers, which is mounted as a network drive. The connection is too slow to work over, so I have to copy over any footage I want to use. I like to keep it in an identical file structure to make re-linking the footage on their end easy, but recreating the structure manually is extremely tedious and time-consuming, as is copying each file individually.
My question is: is there a way to take a list of selected files and batch copy them to my own drive while keeping them in the same file structure?
Essentially, I want the file paths to look exactly the same except for the drive letter.
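Here is a minimal Python sketch of one way to do this, assuming the list of files lives in a plain text file with one full path per line, and using hypothetical drive letters (Z: for the network drive, D: for the local copy):

import os
import shutil

SOURCE_DRIVE = 'Z:'  # network drive (assumption)
DEST_DRIVE = 'D:'    # local drive (assumption)

with open('files_to_copy.txt') as f:  # hypothetical list, one full path per line
    for line in f:
        src = line.strip()
        if not src:
            continue
        # Swap only the drive letter, keeping the rest of the path identical
        dest = DEST_DRIVE + src[len(SOURCE_DRIVE):]
        # Recreate the folder structure on the destination drive as needed
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.copy2(src, dest)  # copy2 also preserves timestamps

Windows' built-in robocopy can mirror whole folder trees, but for a hand-picked list of files a short script like this keeps the drive-letter swap explicit.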

Related

How to Create a Program Which Searches for Values from a .txt or any Text Document in Specific Folders

I am relatively new to programming and want to create a program which can solve a problem that I frequently have.
So here's the background to my short story: I was on a website which hosted many files (we're talking around 500-1000 small files). I thought, "Oh sweet! I want to have all these things on my hard drive so I know that I have access to them... but am probably not going to use them either way." I proceeded to download all 500-1000 files on that site, but encountered a problem when I looked at the properties of my destination folder. Let's say that out of 500 files on the site, my computer only had 499. Just my luck. I wanted to know which one pesky file had slipped right by me and download that file specifically. What I didn't want to do was delete all the files and then try my luck once more at downloading everything from the website. On the site, there was no indication of which files I had downloaded, so I was completely in the dark. I could Ctrl+C each item and then Ctrl+V it into the file manager's search bar, but repeating that 500 times would be tedious.
Now, what I want to do: I want to take all of the file names from the website (the file name that I downloaded and the file name that is on my drive are the same) and put them all in a simple .txt document or something (the website has multiple pieces of unwanted text alongside the text I need; if it is not possible to extract the text from the site like this, then I am OK with manually entering the names via copy-paste). Then I want the computer to take the values in the document and search for each one under a specific folder path (note: the actual files are in subfolders within the root folder I want to choose, so the program has to be able to search within multiple folders of the root). Then I want the computer to check whether each value in the document is present as a file. If a file doesn't exist, I want that value to be displayed in the output. I want this cycle to repeat until all the values have been gone through; the output should list the values that were not present.
Conclusion: you probably now get what I am trying to do; if you don't, tell me what I need to elaborate on. I really don't care how this program is made (what language or software), I just want something that works... I just don't know how to create it myself.
Thanks for reading and any response is appreciated!
Dhanwanth P :)
Here's a solution in Python in case you would like to explore...
Similar to what you described, all the files from the website are listed in an Excel file, 'website_files.xlsx',
and all the downloaded files are saved in a folder, 'downloaded_wav'. The script works regardless of whether the files are saved in the root directory or in sub-folders.
Then I run the Python script below to look for the missing file:
import pandas as pd
import os

path_folder = 'C:\\Users\\Admin\\Downloads\\downloaded_wav'
downloaded_files = []
d, m = 0, 0  # d counts downloaded files, m counts missing files

# Walk the folder recursively so all subfolders are included
for path_name, subfolders, files in os.walk(path_folder):
    for file in files:
        d += 1
        downloaded_files.append(file)

# Compare the downloaded file names against the website's list
df = pd.read_excel('website_files.xlsx')
for file in df.values:
    if file not in downloaded_files:
        print('MISSING', file)
        m += 1

print(len(df), 'files on website')
print(d, 'files downloaded')
print(m, 'missing file(s) found')
Output:
MISSING ['OLIVER_snare_disco_mixready_hybrid.wav']
3 files on website
2 files downloaded
1 missing file(s) found
No worries; I found a solution by myself using Excel (God, it's powerful!).
Basically, I copied and pasted my values from the website, then used a filter to show only the values ending in .wav. Then I used a Power Query on the folder to get a list of all the names of the files in it. Finally, I went ahead and compared the two lists using a formula (with the folder's file names in column B and the website's names in column D):
=IF(COUNTIF(B:B,D1)>0,"OK","MISSING")
If you need more elaboration, I'd be happy to help; just reply to this. There might be an easier way, but I personally liked the straightforwardness of this approach. You only need Microsoft Excel!
EDIT:
I used these two videos, which go over Power Query and the COUNTIF function:
How to Get the List of File Names in a Folder in Excel (without VBA): https://www.youtube.com/watch?v=OSCPVBWOqwc
How to Compare Two Excel Sheets (and find the differences): https://www.youtube.com/watch?v=8Ou_wfzcKKk
In my case, I made my sheet look like this:

Automatically import new csv file data into a "Database" Excel workbook

My situation:
At a competition, we will have 6 "scorers", each using a separate Android tablet. For every game (there will probably be 70 or 80 throughout the tournament), each person will score in a custom app that will create a .csv file. (To be clear, each match will result in 6 separate one-row csv files.) The format of the data will be the same from game to game and from scorer to scorer. I can have control over the names of these files, such as "[Scorer#]_[Match###].csv". These tablets will all be connected to a central computer via USB.
What I would like to do:
I would like to be able to have the data from all of those files automatically populate a "database" table on a single sheet. If possible, I would like a folder to act as a "watch folder" of sorts, where, as a new file shows up in a folder, that data is automatically ingested into the table. If that is not possible, I would be happy with a single function I could run to check for new data after each game ended.
I had considered possibly trying to use power query, but wasn't sure if that could lead me to a usable solution.
Any suggestions would be greatly appreciated!
(and I apologize if anything is unclear. I'm happy to clear up any confusion)
Power Query is a good fit for that scenario. You can set up a query that loads all the files in a specific folder and appends their contents. Refresh the query whenever new files have been added to the folder.
For detailed instructions on how to set up such a query, take a look here:
http://excelunplugged.com/2015/02/10/get-data-from-folder-in-power-query/
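If you'd rather script it outside Excel, here is a minimal Python/pandas sketch of the same append-everything-in-a-folder idea; the folder path, the output file name, and the assumption that each CSV has a header row are all hypothetical:

import glob
import os
import pandas as pd

WATCH_FOLDER = 'C:\\scores'          # hypothetical folder the tablets write into
OUTPUT_FILE = 'match_database.xlsx'  # hypothetical combined workbook

# Read every CSV in the folder (each is assumed to have a header row)
paths = glob.glob(os.path.join(WATCH_FOLDER, '*.csv'))
frames = [pd.read_csv(p) for p in paths]

# Stack all the one-row files into a single table and write it out
combined = pd.concat(frames, ignore_index=True)
combined.to_excel(OUTPUT_FILE, index=False)

Re-running this after each game is the scripted equivalent of refreshing the Power Query.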

VB.NET Create downloadable resource

I've become stuck at this hurdle. I'm trying to create a database that clients fill in; however, the client can set different database paths to view different information in the program. I want to create template databases so that, should they wish to create a new database, it will work with the SQL queries the program uses.
I'm trying to save the templates in to the program so that when a button is clicked, the template file is "downloaded" (copied) to the clients desktop.
Is this even possible?
Thanks
You can open the Resources page of the project properties and add any existing file, including a SQL Server MDF data file. At run time, you can get the data of the file from the appropriate property of My.Resources. The type of the data depends on the type of the file. I'd expect that an MDF file would come back as a Byte array, which you can then write to a file or whatever.
That said, you don't want to make your EXE too big by embedding several sizeable data files in it. You might be better off just using loose files in a subfolder or, if you're determined to use resources, creating a satellite assembly, i.e. a DLL that contains just the resources.
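Language aside, the underlying pattern is just "ship a template file with the application and copy it out on demand". A minimal sketch of that pattern in Python (the file names and the templates folder are hypothetical; in VB.NET the copy step would instead write the Byte array from My.Resources to disk):

import os
import shutil

# Hypothetical template bundled alongside the application
TEMPLATE = os.path.join(os.path.dirname(__file__), 'templates', 'blank.mdf')

def copy_template(dest_dir):
    # Copy the bundled template to the user's chosen location, e.g. the desktop
    dest = os.path.join(dest_dir, 'new_database.mdf')
    shutil.copy2(TEMPLATE, dest)
    return dest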

Is initial save to local drive more resilient than using network drive

I would like to create 100 different reports - they all need to initially be saved to one location - then they need to be distributed to various locations on our network drive.
An Excel vba program is creating the files from an xlsm file saved on the network drive.
Which of the following approaches is more resilient/defensive?
1. Save 100 files to the C: drive of the machine running the code, then distribute them to 100 different locations on the network drive.
2. Save 100 files to the same network location where the controlling xlsm file is located, then distribute them to 100 different locations on the network drive.
Well... that depends. How good is your network? =;)-
When faced with this situation, I tend to write locally and then move the files after the fact. In my experience, this is more reliable and faster. (Assuming you're using a cut/paste action and not a copy/paste action; copy/paste is terribly slow.)
Now, that said, I would actually prefer to just write directly to the network drive. It's less code, fewer bugs, less chance for things to go wrong... IF you have a reliable network.
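For what it's worth, the write-locally-then-move pattern looks roughly like this as a Python sketch (all paths and file names are hypothetical; in VBA the same idea maps to Workbook.SaveAs to a local path followed by a move to the share):

import os
import shutil

LOCAL_DIR = 'C:\\temp\\reports'             # hypothetical local staging folder
NETWORK_DIR = '\\\\server\\share\\reports'  # hypothetical network destination

os.makedirs(LOCAL_DIR, exist_ok=True)
for name in ('report001.xlsx', 'report002.xlsx'):  # stand-ins for the 100 reports
    local_path = os.path.join(LOCAL_DIR, name)
    with open(local_path, 'wb') as f:
        f.write(b'...')  # placeholder: the real report would be written here
    # Move rather than copy, so the network sees each file exactly once
    shutil.move(local_path, os.path.join(NETWORK_DIR, name))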

Vb.Net Document Storage

I am attempting to add a document storage module to our AR software.
I will be prompting the user to attach a doc/image to their account. I will then put a copy of this file into our folder so that we can reference it without having to rely on them keeping the file in its original place. This system is not using a database; instead it's using multiple flat files.
I am looking for guidance on how to handle these files once they have attached them to our system.
How should I store these attached files?
I was thinking I could copy the file over to a subdirectory, then rename it to an auto-generated number so that we do not get duplicates. The bad thing about this is that the contents of the folder can get rather large.
Anyone have a better way? Should I create directories and store them...?
This system is not using a database but instead its using multiple flat files.
This sounds like a multi-user system. How are you handling concurrent access issues? Your answer to that will greatly influence anything we tell you here.
Since you aren't doing anything special with your other files to handle concurrent access, what I would do is add a new folder under your main data folder specifically for document storage, and write your user files there. Additionally, you need to worry about name collisions. To handle that, I'd name each file by appending the date and username to the original file name and taking the MD5 or SHA-1 hash of that string. Then add a file to your other data files that maps the hash values back to the original file names for users.
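A minimal Python sketch of that naming scheme (the storage folder and the field order inside the hashed string are assumptions; the same steps translate directly to VB.NET via System.Security.Cryptography):

import hashlib
import os
import shutil
from datetime import date

DOCS_DIR = os.path.join('data', 'documents')  # hypothetical folder under the main data folder

def store_document(src_path, username):
    # Hash original name + date + username so different users/days never collide
    original_name = os.path.basename(src_path)
    key = '|'.join([original_name, date.today().isoformat(), username])
    hashed = hashlib.sha1(key.encode('utf-8')).hexdigest()

    # Keep the original extension so the file type stays recognizable
    ext = os.path.splitext(original_name)[1]
    dest_path = os.path.join(DOCS_DIR, hashed + ext)

    os.makedirs(DOCS_DIR, exist_ok=True)
    shutil.copy2(src_path, dest_path)
    # A real system would also record (hashed, original_name, username)
    # in the map file described above.
    return dest_path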
Given your constraints (and assuming a limited number of total users), I'd also be inclined to go with a "documents" folder, plus a subfolder for each user. Each file name should include the date to prevent collisions. Over time, you'll have to deal with getting rid of old or outdated files, either administratively or with a UI for users. Consider setting a maximum number of files or a maximum byte count for each user. You'll also want to handle the files of departed users.