Is there a method available to store a file in mongo under a specific directory - pymongo

I need to maintain a file system in MongoDB where directories can be created and a file is placed in the directory I created. Is there any built-in functionality in Python for this, or how can it be done using GridFS? Basically, along with uploading the file, I need to specify the directory where it should be placed.

GridFS permits specifying the name of the file. You can put anything you like in there, including a path whose components are separated by slashes so that it looks like a filesystem path.
However, GridFS does not provide any facilities for hierarchical traversal of stored files (i.e. grouping files into directories). You'd have to implement that in your own application/library if you need this functionality.
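For example, here is a minimal sketch with pymongo; the database name and the path reports/2023/summary.pdf are illustrative:

from pymongo import MongoClient
import gridfs

client = MongoClient("mongodb://localhost:27017")
fs = gridfs.GridFS(client["mydb"])

# Store the file under a slash-separated "path" as its filename.
with open("summary.pdf", "rb") as f:
    file_id = fs.put(f, filename="reports/2023/summary.pdf")

# "List a directory" by matching the filename prefix yourself;
# GridFS itself has no notion of directories.
for grid_out in fs.find({"filename": {"$regex": "^reports/2023/"}}):
    print(grid_out.filename)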

Related

Laravel delete file while developing on Windows

In my local development environment (xampp, windows) I have the file
D:\Documents\repos\somerepo\public\img\bathroom\8\small\someimage.jpg
I try to delete it with:
Storage::delete(public_path('img/bathroom/8/small/someimage.jpg'));
However, the file still exists after running the code.
When I print the path from public_path('img/bathroom/8/small/someimage.jpg') and paste it into File Explorer, it opens the file just fine.
How can I make Laravel delete the file?
When I run:
if(File::exists($path)){
return $path." Does not exist";
}
where $path is public_path('img/bathroom/8/small/someimage.jpg'), it tells me the file does not exist.
Assuming you are using the default filesystem configuration, the public disk stores files in storage/app/public. The local disk uses storage/app. In short, both local disks manage files under storage/.
The public_path() helper returns the fully qualified path to the public directory. So:
public_path('img/bathroom/8/small/someimage.jpg')
will generate a path like this:
/your/project/public/img/bathroom/8/small/someimage.jpg
Note that this is not under storage/, where both Storage local disks operate. Passing Storage a fully qualified path outside the root of the filesystem it manages will not work.
To work with files outside the roots of the disks that Storage is configured for, you will have to use standard PHP functions like unlink(), etc. Alternatively, move the files you want to maintain with Storage into one of the disks it is configured for, add the symlink to make them visible, and update the references to those files in your views etc.
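As a minimal sketch of the first option, assuming the file really lives under public/ as in the question (File::delete is Laravel's File facade, which wraps the native filesystem calls):

use Illuminate\Support\Facades\File;

$path = public_path('img/bathroom/8/small/someimage.jpg');

// The file lives under public/, outside the roots of the local and public Storage disks,
// so delete it with plain PHP (or the File facade, which wraps the same call).
if (File::exists($path)) {
    File::delete($path); // equivalent to unlink($path)
}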

How to use a common file across many Terraform builds?

I have a directory structure as follows to build the Terraform resources for my project:
s3
    main.tf
    variables.tf
    tag_variables.tf
ec2
    main.tf
    variables.tf
    tag_variables.tf
vpc
    main.tf
    variables.tf
    tag_variables.tf
When I want to build or change something in s3, I run Terraform in the s3 directory.
When I want to build the ec2 resources, I cd into that folder and do a Terraform build there.
I run them one at a time.
At the moment I have a list of tags defined as variables, inside each directory.
This is the same file, copied many times to each directory.
Is there a way to avoid copying the same tags file into all of the folders? I'm looking for a solution where I have only one copy of the tags file.
Terraform does offer a solution of sorts using locals, but this still requires the file to be repeated in each directory.
What I tried:
I tried putting the variables in a module, but variables are internal to a module; modules are not designed to share code into the calling configuration.
I tried making the variables an output from the module, but it didn't like that either.
Does anyone have a way to achieve one central tags file that gets used everywhere? I'm thinking of something like including a chunk of source code from elsewhere, but any other solution would be great.
Thanks for that advice, ydaetskcoR. I used symlinks and this works perfectly.
I placed the .tf file containing the tags list in a common directory. Each Terraform project now has a symbolic link to that file. (I've also linked some other common files this way, such as provider.tf.)
For the benefit of others: on Linux, a symbolic link is a small file that points to another file but can be used exactly like the original.
This allows many different and separate Terraform projects to refer to the common file.
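As an illustration of the layout (the file name, variable name, and tag values below are invented for the example), the shared file might look like this, with each project directory holding a symlink created with something like ln -s ../common/tags.tf tags.tf:

# common/tags.tf -- the single shared copy, symlinked into s3/, ec2/ and vpc/
variable "tags" {
  description = "Common tags applied to all resources"
  type        = map(string)
  default = {
    project     = "example"
    environment = "dev"
    owner       = "platform-team"
  }
}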
Note: If you are looking to modularise common Terraform functionality, have a look at Terraform modules; these are designed to modularise your code. They can't be used for the simple use case above, however.

Flowgear access to files on the local file system

I am creating a Flowgear workflow that needs to process a raft of XML data.
I have the XML data contained in a set of .xml files (approximately 400 files) in a folder on my local machine's hard drive, and I want to read them into a workflow, run an XSLT transform and then write the resulting XML to another folder on the same hard drive.
How do I get the flowgear workflow to read these files?
It depends on the use case: the File Enumerator works exceptionally well for looping (as in a for-each) through each file. Sometimes, though, you want to get a list of files in a particular folder and check whether a file has been found at all. For that, I would recommend a C# script that gets the list of files:
Directory.GetFiles(@"{FilePath}", "*.{extension}", SearchOption.TopDirectoryOnly);
Further on, use the File node to read, write, or delete files from a file directory.
NB! You will need to install a DropPoint on the PC/server to allow access to the files; see the Flowgear documentation on DropPoints for more information.
You can use a File Enumerator or a File Watcher to read the files in. The difference is that a File Enumerator enumerates all the files in a folder once, while a File Watcher watches a folder indefinitely and hands new files to the workflow as they are copied in.
You can then use the File node to write the files back to the file system.
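Outside of Flowgear, the same enumerate-transform-write pattern looks roughly like this in plain .NET; the paths below are illustrative, and in an actual workflow the nodes above replace most of this code:

using System.IO;
using System.Xml.Xsl;

class TransformFolder
{
    static void Main()
    {
        // Illustrative paths; in Flowgear these would come from the DropPoint / workflow settings.
        string inputDir = @"C:\data\xml-in";
        string outputDir = @"C:\data\xml-out";

        var xslt = new XslCompiledTransform();
        xslt.Load(@"C:\data\transform.xslt");

        foreach (string inputPath in Directory.GetFiles(inputDir, "*.xml", SearchOption.TopDirectoryOnly))
        {
            // Read each source file, apply the stylesheet and write the result under the same name.
            string outputPath = Path.Combine(outputDir, Path.GetFileName(inputPath));
            xslt.Transform(inputPath, outputPath);
        }
    }
}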

jars, external properties, and external file io

I checked quite a few similar questions, but so far I am unsatisfied with the solutions.
Ever use the Minecraft Server? At initial launch, it creates all the files and folders it needs, and allows you to make changes to files like Server.properties and ops.txt by keeping them external to the executable jar file.
I'm working on a similar project, and I want to duplicate that behavior. Everything works great when I run it in Eclipse. When I export to a jar file, though, things get funky. The external files and folders are created without a hitch, but afterward it would appear as though they cannot be read from or written to. Any ideas how Notch made his server?
--edit--
Scratch that, it doesn't even appear to reliably create the files and folders. Maybe it only creates them on the very first run after creation?
--edit again--
It creates them in the root directory. When I tested it in Eclipse, the root directory was limited to the folder containing the project, and therefore everything looked fine. The solution was to make the class aware of its location and include that in all file operations.
Have the main class in your executable jar file look up where it is, then have it store that information in a global String or something. Prefix your filenames with that string in your file operations, and voila! It's writing to the correct directory.
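A minimal sketch of that approach (the class name and file name here are just for illustration):

import java.io.File;
import java.net.URISyntaxException;

public class AppPaths {
    // Returns the directory containing the running jar (or the classes directory in an IDE).
    public static File baseDir() throws URISyntaxException {
        File location = new File(AppPaths.class
                .getProtectionDomain()
                .getCodeSource()
                .getLocation()
                .toURI());
        // When launched with "java -jar app.jar" the location is the jar file itself,
        // so take its parent directory; from an IDE it is already a directory.
        return location.isFile() ? location.getParentFile() : location;
    }

    public static void main(String[] args) throws Exception {
        // Prefix every external file with baseDir() instead of relying on the working directory.
        File config = new File(baseDir(), "server.properties");
        System.out.println("Config path: " + config.getAbsolutePath());
    }
}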

How do I create resource packages with a filetype? NSBundles?

I am not sure what it is called, but I'm looking for a way to create packages (a directory containing a bunch of non-executable files, e.g. images) with custom file extensions. For example, I want to create a theme package named example.theme that contains a bunch of images. How can I achieve this?
Just create a directory with that name. On iOS there is no user-facing file browser (like the Finder on macOS), so you don't have to worry about whether it appears as a bundle or just as a regular directory.
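A minimal sketch in Swift, assuming the package lives in the app's Documents directory (the names example.theme and background.png are illustrative):

import Foundation

let fm = FileManager.default
let docs = fm.urls(for: .documentDirectory, in: .userDomainMask)[0]
let themeURL = docs.appendingPathComponent("example.theme", isDirectory: true)

do {
    // The ".theme" suffix is just part of the directory name; no registration is required.
    try fm.createDirectory(at: themeURL, withIntermediateDirectories: true)

    // Files copied or written into the directory form the contents of the "package".
    let imageData = Data() // stand-in for real image bytes
    try imageData.write(to: themeURL.appendingPathComponent("background.png"))
} catch {
    print("Failed to create theme package: \(error)")
}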