Script in Mac app temporary location - Objective-C

I am a bit new to Mac OS X app development and I got stuck with Mac OS moving some Resources to a temporary location, whereas the other ones remain in the Resources folder. I have searched Google for a few hours and that did not help.
Here is my situation in a little more detail:
I have an application built in Xcode. It consists of an ObjC TaskWrapper and a perl script (main.pl) that is called after the app is run. There is also an external file (say its name is extrafunctions, with no extension) that I need to load from the perl script at runtime. Both the perl script and the external file are in the Resources folder inside the .app directory after building it from Xcode.
On execution, the main.pl file is moved to a temp location, while the extrafunctions file remains in Resources inside the myMacApp.app folder. Then main.pl tries to load extrafunctions (which used to be in the same directory) and fails, because extrafunctions was not copied along to the temp folder.
I need to access this extrafunctions file from the perl script, but I can find neither a way to get the original .app Resources location, nor a way to have extrafunctions copied to the temp location as well.
Overview:
- main.pl = script in Resources inside myMacApp.app
- extrafunctions = external script to be loaded by main.pl in Resources
- main.pl tries to load extrafunctions: does not work
Some code:
install.pl
...
if (-f "extrafunctions") {
    open my $fh, "<", "extrafunctions" or die "Could not open extrafunctions: $!";
    $content = do { local $/; <$fh> };
    close $fh;
} else {
    ...
}
Any help highly appreciated!
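For what it's worth, a sketch of one way around this (assuming the TaskWrapper launches main.pl via NSTask; the environment variable name RESOURCES_PATH is made up for illustration): pass the bundle's Resources path to the script through its environment, so the script no longer depends on its own (relocated) directory.
NSTask *task = [[NSTask alloc] init];
NSMutableDictionary *env = [[[NSProcessInfo processInfo] environment] mutableCopy];
// Absolute path of the Resources folder inside myMacApp.app
env[@"RESOURCES_PATH"] = [[NSBundle mainBundle] resourcePath];
[task setEnvironment:env];
main.pl can then open "$ENV{RESOURCES_PATH}/extrafunctions" instead of a bare relative path.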

Related

Laravel move url downloaded file to storage clean solution

Problem statement
I am using Laravel Framework 8.83.23
given a URL, a file A gets downloaded using wget to the file system
A brief example just to get an idea:
$cmd = sprintf(
    '"%s" --no-check-certificate "%s" -O "%s" 2>&1',
    $wgetPath,
    $url,
    $filePath
);
$ret = exec($cmd, $output, $result_code);
wget is executed using exec() on a Linux file system
file A then needs to be moved to a Storage, let's call it Storage::disk('posters')
I want the Storage facade to handle the file because, for example, if wget downloaded the file directly to the storage path using a known file-system path string, the Storage facade would not be triggered: you could no longer use Storage::fake('posters') in tests, because the file would get downloaded to the real directory, not the fake one
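For context, a minimal sketch of the kind of test this enables (the test name and file name are illustrative):
public function test_poster_is_stored(): void
{
    Storage::fake('posters');

    // ... run the code that downloads and stores the poster ...

    Storage::disk('posters')->assertExists('example.jpg');
}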
Proposed solution idea
I create a new disk, called Storage::disk('temp')
the wget will download the file to that directory (disk) using a string path, so the Storage facade will not be triggered; this uses code like storage_path(sprintf("app{$DS}temp{$DS}%s.%s", $fname, $extension)), where $DS is the directory separator
then, move the file from Storage::disk('temp') to Storage::disk('posters')
Working code
Storage::disk('posters')->put(
    $fName . '.jpg',
    Storage::disk('temp')->get($fName . '.jpg')
);
the code above will use the Storage facade, and therefore, Storage::fake('posters') can be used in tests
Question
Is there a cleaner solution you would propose? Creating a new storage disk for temporary files does not seem clean to me.
I am not sure whether there is some temporary storage already available for me to use?
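One possible variant, as a sketch only (putFileAs() and tempnam() are standard APIs; the rest of the names are illustrative): download to a plain OS temp file and hand it to the posters disk directly, so no extra disk definition is needed and Storage::fake('posters') still works in tests.
use Illuminate\Http\File;
use Illuminate\Support\Facades\Storage;

// Plain OS temp file; no Storage disk involved for the download itself.
$tmpPath = tempnam(sys_get_temp_dir(), 'poster_');

$cmd = sprintf('"%s" --no-check-certificate "%s" -O "%s" 2>&1', $wgetPath, $url, $tmpPath);
exec($cmd, $output, $result_code);

// The Storage facade performs the final write, so it can be faked in tests.
Storage::disk('posters')->putFileAs('', new File($tmpPath), $fName . '.jpg');
unlink($tmpPath);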

Perl - open with relative path Apache

We recently inherited a somewhat legacy perl application and we're working to migrate it to a new server as well as set up a sandbox environment so we can start sorting the application out. The issue we're having is that the code currently uses some relative paths for the open method, and it currently works in their shared hosting environment (which we have little access to).
open(HANDLE,"<../relative/path/to/file.txt")
We pulled all of the code, paths, etc. over and for the most part have the application up and running, until we hit one of the scripts that does the above and opens a file with a relative path. Then it fails.
If we run the code via the command line, the relative paths work. If we modify the path to be the full path it works both via command line and through Apache (navigating to the page in the browser).
This makes me think there is some module or configuration option we need to set in Apache to allow the perl scripts to use the open command with relative paths?
Relative paths are relative to the process's current working directory, which is not necessarily the same as the directory containing the program. In fact, it's often / for daemons (and thus their children).
For paths relative to the program's location, use
use FindBin qw( $RealBin );

my $qfn = "$RealBin/relative/path/to/file.txt";
open(my $HANDLE, "<", $qfn)
    or die "Can't open $qfn: $!";
As mentioned in the comment above, our final solution was to use:
use File::Basename qw( dirname );
chdir(dirname($0));
This allowed us to get the code working while also keeping additional code modifications to a minimum.
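A minimal sketch of how that fix sits at the top of a legacy CGI script (the opened path is just the example from earlier):
use strict;
use warnings;
use File::Basename qw( dirname );

# Make legacy relative paths resolve against the script's own
# directory instead of Apache's current working directory.
BEGIN { chdir( dirname($0) ) or die "chdir failed: $!"; }

open( my $fh, '<', '../relative/path/to/file.txt' )
    or die "Can't open file: $!";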
I was running a script out of cgi-bin and wanted to open a template in htdocs/templates so I did this in my script...
use HTML::Template;

# open the html template
my $filename = sprintf("%s/templates/test.tmplt", $ENV{DOCUMENT_ROOT});
my $template = HTML::Template->new(filename => $filename);
It is bad practice to specify a file's path in open as a fixed string, unless the path is predefined and never changes, as for example with /etc/fstab on Linux.
You should change the code to use variable(s) instead.
Define the variables at the top of the script; in the future, if you need to change the base or the path, you will know to find it in the first few lines of the code.
In such a situation, temporarily add to the code something like:
use strict;
use warnings;

open( my $fh, '>', 'my_uniq_file.txt' )
    or die "Couldn't open my_uniq_file.txt: $!";
print $fh 'This directory is default base for path location';
close $fh;
Once you run your script from the webserver, look for the file my_uniq_file.txt -- its location will be the webserver's default base directory for relative file locations.
Now adjust the variables holding the file paths accordingly.
use strict;
use warnings;

my $dir_project = './project_1/';
my $audio_data  = 'audio_data.dat';
my $video_data  = 'video_data.dat';
my $descr_data  = 'descr_data.dat';

my $qfn = $dir_project . $audio_data;

open( my $fh, '<', $qfn )
    or die "Couldn't open $qfn: $!";
while ( <$fh> ) {
    chomp;
    # [do something with the data]
}
close $fh;

Pyinstaller does not work with local files

I've made an app with PyQt5 and it works perfectly fine in my environment, and now I want to deploy it to .exe and .dmg with pyinstaller.
My app uses two local files, certificate.yml and data.pkl, which contain AWS certificate data and application data respectively. These are located in the same directory as main.py, which starts my app.
In my main.spec file I've added the following
a.datas += [('certificate.yml', 'certificate.yml', 'DATA'),
            ('data.pkl', 'data.pkl', 'DATA')]
and made the .app. However, when I start my .app, it does not find the certificate.yml file and raises the following error.
FileNotFoundError: [Errno 2] No such file or directory: 'certificate.yml'
How can I include my local files with pyinstaller?
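A common fix, sketched below (resource_path is a made-up helper name): when PyInstaller builds a one-file executable, bundled data files are unpacked to a temporary directory exposed as sys._MEIPASS, so resolve them relative to that rather than the current working directory.
import os
import sys

def resource_path(relative_path):
    # Frozen one-file apps unpack their data files under sys._MEIPASS;
    # fall back to the source directory during development.
    base = getattr(sys, '_MEIPASS', os.path.dirname(os.path.abspath(__file__)))
    return os.path.join(base, relative_path)

cert_file = resource_path('certificate.yml')
data_file = resource_path('data.pkl')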

Folder.CopyHere() from IShellDispatch silently fails to unpack most of a Zip file, but only if the file is on a DVD

My application (the Endless OS installer for Windows) uses methods on IShellDispatch (provided by Shell32.dll) to extract a Zip file (example). In various modes of operation, this file may be downloaded from the internet to a fixed disk; on an exFAT filesystem on a USB stick; or in an ISO 9660 (with Joliet extensions) image which may be mounted as a virtual drive, or written to a DVD. In all cases but the last, extracting the Zip file works; but when the Zip file is on a DVD, all that's created in the target directory is the directory structure (EFI\BOOT\) for the first file in the archive (EFI\BOOT\bootx64.efi); neither that file, nor any other files in that directory or any other directory, are extracted. With exactly the same Zip file on any other medium — including inserting the ISO into a VirtualBox virtual optical drive — the problem disappears.
The original C++ code where I first saw this problem is here. It looks like this (with error handling removed, since all methods return a successful HRESULT in my testing, and with the FOF_NO_UI flag also removed in case it was masking an error message):
void UnpackZip(const CComBSTR source, const CComBSTR dest) {
    CComPtr<IShellDispatch> pISD;
    CComPtr<Folder> pToFolder, pFromFolder;
    CComPtr<FolderItems> folderItems;

    CoCreateInstance(CLSID_Shell, NULL, CLSCTX_INPROC_SERVER, IID_IShellDispatch,
                     (void **)&pISD);
    pISD->NameSpace(CComVariant(dest), &pToFolder);
    pISD->NameSpace(CComVariant(source), &pFromFolder);
    pFromFolder->Items(&folderItems);
    pToFolder->CopyHere(CComVariant(folderItems), CComVariant(0));
}
I can reproduce this problem both by attempting to unpack the Zip file from Windows Explorer GUI (which does not report any errors), and by running the following in PowerShell, so I am reasonably sure it's not my application code that's at fault:
PS> $shell = new-object -ComObject shell.application
PS> $zip = $shell.NameSpace("D:\endless\eos-eos3.3-amd64-amd64.171122-232702.base.boot.zip")
PS> $target = $shell.NameSpace("C:\test")
PS> $target.CopyHere($zip.items())
If I explicitly iterate over the top-level folders in the Zip file, as follows, then some but not all files from each folder are extracted (and still none in EFI\BOOT\):
PS> foreach ($item in $zip.items()) { $target.CopyHere($item) }
If I explicitly select that first file which is not unpacked, no error is raised:
PS> $item = $zip.items().Item(0).GetFolder().Items().Item(0).GetFolder().Items().item(0);
PS> $item
Application : System.__ComObject
Parent : System.__ComObject
Name : bootx64.efi
Path :
GetLink :
GetFolder :
IsLink : False
IsFolder : False
IsFileSystem : False
IsBrowsable : False
ModifyDate : 22/11/2017 23:33:56
Size : 1157984
Type : EFI File
PS> $target.CopyHere($item)
But it's still not unpacked to $target. The DVD drive does not even spin up!
If I copy exactly the same Zip file to a fixed drive – or mount the DVD ISO as a virtual disk, whether within Windows, or from the outside via the VirtualBox VM Windows is running in – everything works correctly. The problem only occurs when the archive is really on a physical DVD. I've received many reports of this problem from users with various hardware, so it's not my DVD drive or laptop. I personally have only tested and reproduced it on Windows 10 (builds 14393 and 15063, at least); I'm not sure whether it can be reproduced on older Windows versions, but since Windows 10 is the most commonly-used version among users of this application, it's a moot point whether this worked on older versions.
The files which are not unpacked are all those ending .efi (EFI executables) and those ending .mod (legacy BIOS GRUB modules). This is totally deterministic. But I'm stumped as to why the shell would take such a disliking to certain files, only when the archive is on a DVD.
My application can work around this problem by copying the Zip file to the hard disk before extracting it. But the question remains: why is this happening? And at a higher level: short of stepping through the compiled Shell32.dll code in a debugger, how could I diagnose what's going wrong?
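For completeness, the copy-first workaround as a PowerShell sketch (same shell API as the repro above; the temp file name is illustrative):
PS> $local = Join-Path $env:TEMP 'eos.zip'
PS> Copy-Item 'D:\endless\eos-eos3.3-amd64-amd64.171122-232702.base.boot.zip' $local
PS> $shell = new-object -ComObject shell.application
PS> $shell.NameSpace('C:\test').CopyHere($shell.NameSpace($local).Items(), 16)  # 16 = respond "Yes to All"
Extraction from the local copy succeeds, consistent with the problem being specific to physical optical media.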

Endeca Forge failure move_dgraph-input_to_dgraph-input-old

I have an intermittent issue when distributing indexes :( All servers are Windows Server 2008.
I have two servers to distribute to and one of them has failed twice with this error:
INFO: [MDEXHost1] Starting shell utility 'move_dgraph-input_to_dgraph-input-old'.
10-Jun-2015 06:08:36 com.endeca.soleng.eac.toolkit.script.Script runBeanShellScript
SEVERE: Utility 'move_dgraph-input_to_dgraph-input-old' failed.
With a bit of further digging I've found this error in a log file in the PlatformServices\workspace\logs\shell folder:
Failed to move D:\Firebird\config\script\..\..\.\data\dgraphs\Dgraph1\dgraph_input to
D:\Firebird\config\script\..\..\.\data\dgraphs\Dgraph1\dgraph_input_old: No such file or directory at -e line 1.
The state of the server is that it has a dgraph_input_new folder but it's struggling to create the dgraph_input_old folder. The dgraph_input folder does exist, so the 'No such file or directory' is interesting.
The server has plenty of disk space for the operation and as it's intermittent I don't think it's file/folder permissions (otherwise it would fail all the time). I've even asked for on-access virus scanning to be disabled for those folders in case our virus scanner was locking files/folders.
I'm struggling to come up with a resolution to the issue, HALP!
EDIT: The forge process did stop the dgraph but the Tomcat6 process is still running. Is that normal? Could Tomcat be locking the folder?
EDIT: The task to move the folder is a bit of Perl that looks like this:
perl.exe -e "use strict; use File::Spec; use File::Copy; use File::Glob qw/:glob/; my $source = 'D:\Firebird\config\script\..\..\.\data\dgraphs\Dgraph1\dgraph_input'; $source =~ s/[\\\/]+$//; my @sources = bsd_glob($source); foreach my $file (@sources) { my @fromPath = File::Spec->splitdir($file); if (scalar @fromPath eq 0) { die \"Failed to split path: $!\"; } my $fromRelative = $fromPath[scalar @fromPath - 1]; my $toFile = 'D:\Firebird\config\script\..\..\.\data\dgraphs\Dgraph1\dgraph_input_old'; if ( -d $toFile ) { $toFile = File::Spec->catdir($toFile, $fromRelative); } my $res = move($file, $toFile); if (! $res) { die \"Failed to move $file to $toFile: $!\"; } }"
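For readability, the same utility expanded into an ordinary script (behavior unchanged):
use strict;
use File::Spec;
use File::Copy;
use File::Glob qw/:glob/;

my $source = 'D:\Firebird\config\script\..\..\.\data\dgraphs\Dgraph1\dgraph_input';
$source =~ s/[\\\/]+$//;    # strip trailing slashes

my @sources = bsd_glob($source);
foreach my $file (@sources) {
    my @fromPath = File::Spec->splitdir($file);
    if (scalar @fromPath eq 0) { die "Failed to split path: $!"; }
    my $fromRelative = $fromPath[scalar @fromPath - 1];    # last path component

    my $toFile = 'D:\Firebird\config\script\..\..\.\data\dgraphs\Dgraph1\dgraph_input_old';
    if ( -d $toFile ) { $toFile = File::Spec->catdir($toFile, $fromRelative); }

    my $res = move($file, $toFile);
    if (! $res) { die "Failed to move $file to $toFile: $!"; }
}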
EDIT: It seems to be a plain permission issue, I can't rename the folder without elevating myself to an administrator. The service is running as a user who's in the Administrators group.
What could happen to make this folder admin only?
I know this question dates back three years, but I recently experienced the same issue and thought this could help others.
The logs were interesting, as GogLlundain pointed out.
The way to solve this is:
1. Stop the MDEX server in the Workbench, which will in parallel kill the dgraph process too.
2. If you open Task Manager on the server where the MDEX is defined, you may find two dgraph.exe processes running.
3. Kill the older dgraph.exe task, then run the baseline script; your process will run smoothly.
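If it helps, a sketch of step 3 in PowerShell (assuming, per the steps above, exactly two dgraph.exe instances, with the stale one being the older):
# Stop the dgraph.exe instance that was started first.
Get-Process dgraph | Sort-Object StartTime | Select-Object -First 1 | Stop-Process -Force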