I presume that this command first creates a tar file and then gzips it:
tar -zcvf file.tgz /somefolder
The question is: is the full .tar file first written somewhere on disk and then gzipped, or is it held in RAM and gzipped from there?
Both gzip and tar are running at the same time, with tar piping its output into gzip. The entire tar file never exists anywhere, either in RAM or on the disk. Chunks of the tar file exist temporarily in RAM before being compressed by gzip, and the compressed output is written to disk.
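For illustration, the same archive can be produced with an explicit pipe, which makes the streaming behaviour visible (a sketch using the names from the question):
tar -cvf - /somefolder | gzip > file.tgz
Here tar writes the archive to stdout, gzip reads it chunk by chunk from the pipe, and only the compressed output ever reaches the disk; -z simply makes tar run the compressor for you.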
I have an Angular project with a node_modules directory in my project dir.
It is pretty full with all the files of the modules I use.
I periodically save the project folder as a backup. Doing this takes a while because of node_modules.
Is it a bad idea to remove node_modules before the backup and then, after doing more coding, rebuild it with npm install?
Or maybe there's a better way to have smaller backups?
EDIT
I use git and also make this directory backup. My question is about the directory backup only.
Your package.json file acts as a blueprint for your required node modules, listing the version of every module used in the project, so keeping a backup of node_modules doesn't make sense; you can get it back with npm install at any time.
If you are using Git, you can ignore node_modules by adding the following to your .gitignore file:
# dependencies
/node_modules
node_modules and package-lock.json do not need to be backed up; they can be regenerated with npm install, since all the required dependency information is present in package.json.
Please use a version control system such as git instead of manual backups.
Check this link for better understanding
https://git-scm.com/book/en/v2/Getting-Started-About-Version-Control
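If the directory backup itself is the concern, another option is to exclude node_modules from the archive instead of deleting it. A rough sketch, assuming a tar-based backup and a project folder named my-project (both assumptions, not from the question):
tar --exclude='node_modules' -czvf my-project-backup.tgz my-project/
The backup stays small, the working copy keeps its installed modules, and npm install can still recreate node_modules on a restored copy.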
I'm trying to set up fzf.vim for Vim on Windows 10. fzf can be pointed at an alternative file-listing command like ripgrep or fd, which is supposed to respect .gitignore.
My .gitignore file has this line, which is working fine for git commits, etc.:
node_modules/
My dir structure is
/working directory
.gitignore file
.git dir
/node_modules dir
When I run
fd --type f
or
rg --files
it lists all the files in node_modules.
I feel like this may be a windows problem.
How can I get these programs to use .gitignore to ignore node_modules?
Turns out I was using a project where the git repo had not yet been initialized, so I had a .gitignore but no .git folder. By default, ripgrep requires the directory to be an actual git repository before it will use the .gitignore file. My solution was to use the following flag:
--no-require-git
More info here
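For reference, the flag goes directly on the ripgrep invocation, and, assuming fzf.vim picks up its file lister from the FZF_DEFAULT_COMMAND environment variable as plain fzf does, it can be wired in the same way. A sketch for cmd.exe:
rg --files --no-require-git
set FZF_DEFAULT_COMMAND=rg --files --no-require-git
Running git init in the project would also make ripgrep honour .gitignore without the extra flag.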
I have the following folder structure:
-Videos
-1. Summer
summer.mp4
summer.srt
summer2.zip
-2. Winter
winter.mkv
winter.vtt
- ..
How can I create a batch or powershell script that results in the following folder structure:
-Videos
-1. Summer
summer.mp4
1. Summer.7z
-2. Winter
winter.mkv
2. Winter.7z
- ..
So basically: iterate through all sub-folders, zip all the contents except video formats using 7-Zip, and delete the original files that were zipped. I'm open to suggestions using VB!
After 2 weeks of research I managed to do it myself. Here's the answer for anyone with a similar situation:
@echo off
cls
set zip="C:\Program Files (x86)\7-Zip\7z.exe"
set loc=C:\User\Downloads\Videos\UnConverted
cd /d "%loc%"
for /d %%D in ("%loc%\*") do (
    for /d %%I in ("%%D\*") do (
        cd /d "%%I"
        %zip% a -t7z -mx9 "%%~nxI.7z" *.txt *.html *.vtt *.srt *.zip *.jpeg *.png
        del /S *.txt *.html *.vtt *.srt *.zip *.jpeg *.png
    )
)
cd /d "%loc%"
echo ----- Compression Complete! -----
pause
The first for loop iterates through every subfolder of the root folder, while the second for loop iterates through each subfolder of those subfolders. After that:
cd /d "%%I" - enter the sub-sub-folder
%zip% - call the 7z CLI
a - add files to an archive
-t7z - use the 7z archive format
-mx9 - with maximum compression
"%%~nxI.7z" - name the archive after the sub-sub-folder
*.txt *.html *.vtt *.srt *.zip *.jpeg *.png - file extensions to archive
Finally, del deletes the files that were just archived.
Are there any tools for converting texinfo files into something Doxygen can process (presumably markdown)? I have a bunch of old texinfo files that I'd like to link in with the doxygen docs we have. I guess I'll generate html from the texinfo and link to that from doxygen source files if I have to, but I'd rather integrate the texinfo docs into the doxygen ones.
I have been struggling with this on and off for legacy documentation on my project. I've found a solution that, while not completely automated, is workable. Since Doxygen can process markdown files, I've been converting my texi files into markdown.
Convert the texi file to html via texi2html
$ texi2html foo.texi
Convert the html file to markdown via pandoc
$ pandoc -f html -t markdown -o foo.md foo.html
Clean up the resulting markdown file with your markdown editor of choice. There are a plethora of them, but on OSX I use Markdown Pro.
Edit your Doxyfile and tell Doxygen to process the markdown files in the directory. Either add the file to your INPUT list, or add the .md extension to the FILE_PATTERNS tag. More information about Doxygen's markdown support may be found here:
http://www.doxygen.nl/manual/markdown.html
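If there are many files, the first two steps can be scripted. A rough sketch (untested, and it assumes texi2html writes foo.html next to foo.texi, which is its default behaviour):
for f in *.texi; do
  texi2html "$f"
  pandoc -f html -t markdown -o "${f%.texi}.md" "${f%.texi}.html"
done
The manual clean-up and the Doxyfile changes (INPUT / FILE_PATTERNS) still have to be done by hand.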
I'm using the WiX installer to copy a folder under the Program Files folder, but I couldn't do it for an entire folder; I can only do it on a file-by-file basis.
I would appreciate any help in this regard.
<Directory Id="CopyTestDir"...>
<Property Id="SOURCEDIRECTORY" Value="c:\doc\bin\path" />
<Component Guid="A7C42303-1D77-4C70-8D5C-0FD0F9158EB4" Id="CopyComponent">
<CopyFile Id="SomeId" SourceProperty="SOURCEDIRECTORY"
DestinationDirectory="CopyTestDir" SourceName="*" />
</Component>
It doesn't handle subdirectories though. If you don't have a known directory structure for the source files, then you'll need to pursue the semi-custom action approach, writing entries into the MoveFile table for each directory. (source)