Remove (.UNRST) file from an Eclipse/Petrel simulation?

I am running my simulations with Eclipse E100, and the run generates a file with the extension .UNRST which takes up a lot of space in the folder.
Is there a way to stop the simulation from generating this file?
Best regards

If you don't want unified restart files, you can write non-unified restart files (.Xnnnn) instead; summary output likewise comes either unified (.UNSMRY) or non-unified (.Snnnn).
The keywords are FMTOUT for formatted vs. binary output and/or UNIFOUT for unified vs. non-unified output.
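Illustrative only (mnemonics quoted from memory, so check the manual): restart output is requested with RPTRST in the SCHEDULE section, so removing the request - or setting BASIC=0 explicitly - should stop restart records being written at all, while dropping UNIFOUT from RUNSPEC turns any remaining restart output into per-step .Xnnnn files rather than one ever-growing .UNRST:
-- SCHEDULE section: switch restart record output off entirely
RPTRST
  BASIC=0 /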

Related

Extracting a large zip file onto a server while the PC is turned off

I've got a zip file of 1.6 GB and it takes forever to extract it on the server. I left it running all night long, and when I woke up it wasn't finished. There is no way to keep track of how much time is left or what percentage is done, so I'm not sure the whole thing is even working properly. Is there a way to extract that file using the File Manager in cPanel so that it can be done while my PC is off, and maybe notify me by email when it's done? I basically need to copy a webshop from the live server to the developers' server and am just losing too much time on this. So if anyone has a better idea how to extract it, please feel free to suggest it.
P.S. Deleting the files that did extract takes forever too.
P.P.S. I'm a Linux sysadmin.
If it's all about copying files from one server to another - why not just use rsync and avoid archiving?
I mean, if extraction is a pain - remove it from the equation :)
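For example (hostname and paths hypothetical), run from the developer server:
rsync -avz --progress deploy@live.example.com:/var/www/webshop/ /var/www/webshop/
-a preserves permissions and timestamps, -z compresses in transit, and --progress shows how far along each file is; if the connection drops, rerunning the same command picks up where it left off, since rsync skips files that already match.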
It is not a good idea to use the cPanel File Manager for this task, as the server will probably kill the extraction process if it takes too long.
The best way to go about this would be via SSH, while logged in as root. If you need to switch off your computer, you should run the extraction in screen.
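A minimal sketch, with a hypothetical archive name:
screen -S extract
unzip -q webshop.zip -d /var/www/webshop
Detach with Ctrl-A D and log out; the extraction keeps running on the server, and screen -r extract reattaches later so you can check on it.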
You can also use unzipper.php, which you can get from GitHub.
It requires you to upload your zip file and unzipper.php, then visit www.yourdomain.com/unzipper.php.

Executing Abaqus Model in Taverna

I'm pretty new to both Taverna and Abaqus, but I am trying to run an Abaqus model using a "Tool" service in Taverna remotely on an HPC. This works fine if I already have my model file and inputs on the HPC, but I need a way of uploading the files dynamically in Taverna (I'm trying to wrap Abaqus models generically).
I've tried adding an input port that takes a file list, but I don't know how I can copy it to the "location" that I've set for the tool. Could a Beanshell service be the answer, or can I iterate through the file list and copy the files up before executing the Abaqus model?
Thanks
When you say that you created an input port that takes a file list, I guess you mean an input to the tool service.
Assuming the input port is called my_file_list, when the tool service is run it will take a list of data values on port my_file_list. As an example, say "hello", "hi" and "hola" are the three values in the list.
Wherever the tool service is run, it executes in a temporary directory - a different directory for each execution of the service. It is normally something like /tmp/usecase-2029778474741087696
Three files will be created in the temporary directory; those files contain the three values (in this example) that the tool service received on port my_file_list. The files could be called
/tmp/usecase-2029778474741087696/tempfile.0.tmp containing hello
/tmp/usecase-2029778474741087696/tempfile.1.tmp containing hi
/tmp/usecase-2029778474741087696/tempfile.2.tmp containing hola
There will also be a file called my_file_list, named after the port. That file will contain
/tmp/usecase-2029778474741087696/tempfile.0.tmp
/tmp/usecase-2029778474741087696/tempfile.1.tmp
/tmp/usecase-2029778474741087696/tempfile.2.tmp
The script of your tool service would normally read the contents of my_file_list line by line and do something with the contents of the listed file(s), as in the sketch below.
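A minimal sketch of such a script (the Abaqus command line and job name are hypothetical; my_file_list is the port from the example above):
# stage every file listed for port my_file_list into the working directory
while read -r f; do
  cp "$f" .
done < my_file_list
# then run the model (hypothetical invocation)
abaqus job=my_model interactive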
I have also seen some scripts that 'cheat' and iterate directly over tempfile*.tmp, but that would be "a bad thing". The problem with that trick is that if you want to add a second list of files to the tool service, then the file my_file_list could contain
/tmp/usecase-7932018053449784034/tempfile.4.tmp
/tmp/usecase-7932018053449784034/tempfile.5.tmp
/tmp/usecase-7932018053449784034/tempfile.6.tmp
as other temporary files were used for the other file list port.
I hope that helps
The tool service allows you to upload files - but if you are using the HPC through a job submission node, then you would have to modify your command-line tool to use the job file-staging command to push the files further as part of the job. The files would be available in the current (temporary) directory of the specified tool script.
I would try to do it through the Tool service and not involve Beanshell - then you can keep your workflow simpler.
A good thing to remember is that you can write multiple shell commands in the box.
Similarly, you would probably want to retrieve the results back so that you can process them further in the workflow (unless they are massive - in which case you should just output their remote filenames and send them in again to the next HPC job).
The exact commands to use for staging files and retrieving them depend on the HPC job submission system; the sketch below assumes a PBS-style scheduler purely for illustration. Which one are you using?
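For illustration only - every name is hypothetical and the polling loop is scheduler-specific:
# submit the Abaqus job script and wait for it to leave the queue
JOBID=$(qsub run_abaqus.pbs)
while qstat "$JOBID" >/dev/null 2>&1; do
  sleep 30
done
# stage the results back into the working directory for Taverna to collect
cp ~/jobs/my_model/results.odb .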
Thanks for the input guys.
It was my misunderstanding of how Taverna uses the file list. All the files in the list are copied to the temporary "sandbox" and are therefore available for use.
Another nice, easy way is to zip the directory and pass the zipped file into an input port for the service, then just unzip it inside the command.
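For instance, if the archive arrives on a single (non-list) input port called model_zip - a hypothetical name - Taverna stages its bytes as a file of that name in the sandbox, so the script can begin with:
# unpack the staged archive, then run the model (hypothetical command)
unzip -o model_zip -d .
abaqus job=my_model interactive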
Thanks again

Monitoring a folder for a specific file

I have a program that uploads .txt or .rje files from a folder. When you put any other file format into the folder, like .jar, the application crashes.
I cannot change the mechanics of the application, so I would like to know if there is a type of program/script I can use that monitors the folder for any non-.txt/.rje files and moves them out of the folder as soon as they are put there...
Is this possible using a script? (I do not want to use a .exe application to do this... I am not allowed to install third-party software on the server where this folder exists...)
Thank you
Your solution won't work as you have a race condition between the program doing the upload and the one doing the deletion. If the upload runs first, it still crashes.
The correct solution is to modify the upload program to cope with this scenario.
If that is not possible, then the only safe workaround would be to use a new folder to drop the files in, and have a script that constantly scans that folder; when a new file appears, it either moves it to the processing folder or deletes it, as appropriate.
(The actual detection is not my area of expertise, but the simplest approach would be a bat file that runs periodically - or even just runs once and loops: wait, check, move, wait, check, move, etc. - and processes everything in the folder when it runs; see the sketch below.)
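A minimal sketch of that bat file, with hypothetical folder names - C:\drop is where files arrive and C:\upload is the folder the application reads - using only built-in Windows commands, so nothing has to be installed:
@echo off
:loop
rem pass the good files on, then quarantine whatever is left
move C:\drop\*.txt C:\upload >nul 2>&1
move C:\drop\*.rje C:\upload >nul 2>&1
move C:\drop\*.* C:\quarantine >nul 2>&1
timeout /t 10 >nul
goto loop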

Need suggestions on what tool to use for manipulating a file

We need to create a daily process that will manipulate a file that is currently generated manually before being FTPed to a vendor. The issues with the current file are as follows:
1) It is currently comma-delimited and it needs to be pipe-delimited.
2) The vendor only wants specific columns to be sent. They have a limit of 26 columns.
We need to develop an automated process that can be scheduled to run once a day, pick up a file with a specific extension, do the file manipulation, and FTP the file.
Ideally, we would like some error handling in the process. We would want an email to be sent out if there was no file to process or if there was an error during the manipulation or the FTP step.
My first thought was to use SQL Server Import/Export. I've done this before, but only for packages that were run manually. This process needs to be fully automated (after the existing file is manually generated), and I don't see a way to pick up any file with a specific extension; it looks like I have to select a specific file.
Is there a way to use Import/Export or some similar tool?
Or, do I need to write a program to do this sort of task? It seems to me like it would be more work to write a program. So, I am trying to avoid that.
Thank you for your help!
You should write a program. Seriously.
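Here is a sketch of how small that program can be, written as a shell script (the same logic ports to PowerShell or a small console app); every path, address, and credential is hypothetical, and it assumes the data contains no quoted commas:
#!/bin/sh
# pick up the newest file with the agreed extension
FILE=$(ls -t /data/outgoing/*.csv 2>/dev/null | head -1)
if [ -z "$FILE" ]; then
  echo "no file found" | mail -s "daily feed: nothing to process" ops@example.com
  exit 1
fi
# keep the first 26 columns and switch the delimiter from comma to pipe
cut -d',' -f1-26 "$FILE" | tr ',' '|' > /tmp/feed.psv
# upload, and send an email if the transfer fails
curl -T /tmp/feed.psv ftp://ftp.vendor.example.com/inbox/ --user feeduser:secret ||
  echo "FTP failed for $FILE" | mail -s "daily feed: FTP error" ops@example.com
Schedule it once a day with cron or Task Scheduler; if the vendor wants a scattered subset of columns rather than the first 26, list the field numbers explicitly (e.g. cut -d',' -f1,3,7-20).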

use Archive Utility.app from the command line (or with AppleScript)

I would like to use Archive Utility.app in an app I'm writing to compress one or more files.
Doing (from the command line):
/System/Library/CoreServices/Archive\ Utility.app/Contents/MacOS/Archive\ Utility file_to_zip
This does work, but it creates a .cpgz file. I could live with that (even though .zip would be better), but the main problem is that I am not able to compress two files into one archive:
/System/Library/CoreServices/Archive\ Utility.app/Contents/MacOS/Archive\ Utility ~/foo/a.txt ~/bar/b.txt
The above command will create two archives (~/foo/a.txt.cpgz and ~/bar/b.txt.cpgz).
I cannot get this to do what I want either:
open -a /System/Library/CoreServices/Archive\ Utility.app --args xxxx
I'd rather not use the zip command, because the files to be compressed are rather large, so it would be neat to have the built-in progress bar.
Or, could I use Archive Utility programmatically?
Thanks.
Archive Utility.app uses the following to create its zip archives:
ditto -c -k --sequesterRsrc --keepParent Product.app Product.app.zip
Archive Utility.app isn't scriptable.
The -dd/--display-dots option will cause the command-line zip utility to display progress dots when compressing. You could parse the output of zip for your progress bar. zip will also display dots if it takes more than five seconds to scan and open files.
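For example (archive and file names hypothetical), this prints a dot for every 10 MB of input processed, which your code can count to drive the progress bar:
zip -r -dd -ds 10m archive.zip foo/ bar/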
Better would be to integrate a compression library, such as zlib or libbzip2. Both of those let you compress a portion of the data at a time. You'll need to handle progress bar updates, which you can do after compressing each block of data.
How about Automator? The "Create Archive" action would work.
I have used Archive Utility to decompress files from AppleScripts:
tell application "Archive Utility" to open filePath
However, this only tells Archive Utility to start decompressing. It will complete in its own time, and the AppleScript will continue to execute without waiting for the decompression to finish. Archive Utility will not tell the AppleScript when it is done.
You can use the above line if Archive Utility is already running; it will add the new file to its queue and decompress when ready.
Also, Archive Utility's preferences control how it accomplishes the goal. For example, it might decompress to a different folder or delete the original. You can run it as an application and change the preferences if that helps.
I have not used this to get Archive Utility to compress files.