Error setting up WORHP license and parameters file - worhp

I am trying to use WORHP together with AMPL on my Linux Mint (v19.3) machine, but I just can't figure out how to properly set up the license file (and probably the .xml parameter file as well).
I've placed AMPL's and WORHP's binaries, as well as libworhp.so, .lic and .xml files, in the same directory as follows:
user@laptop:/opt/AMPL$ ls -la
total 121676
drwxr-xr-x 3 root root 4096 May 30 22:38 .
drwxr-xr-x 9 root root 4096 May 29 21:43 ..
-rwxr-xr-x 1 500 users 1226960 May 2 17:47 ampl
-r--r--r-- 1 user user 690 May 29 20:01 ampl.lic
-rwxr-xr-x 1 root root 4100512 May 29 22:00 libworhp.so
-r--r--r-- 1 root root 175 May 30 22:53 rosenbrock.mod
-rwxr-xr-x 1 root root 339400 May 29 21:43 worhp_ampl
-r--r--r-- 1 user user 1122 May 30 21:56 worhp.lic
-r--r--r-- 1 root root 20239 May 29 22:34 worhp.xml
The solver binary is called worhp_ampl, and rosenbrock.mod is a valid AMPL example file.
Still, I get the following error message whenever I try to solve the model with:
user@laptop:/opt/AMPL$ ampl rosenbrock.mod
ReadParamsNoInit: Used parameter file worhp.xml
* Read 268/268 parameters.
WORHP: Using data file /tmp/at9188. Error (License): Could not open license file.
* Local MACs:
- 00:90:f5:93:e9:62
- 74:f0:6d:85:27:ee
WorhpInit: Could not obtain license.
Unsuccessful termination: License error.
Error (AMPL_Init): Error in WorhpInit.
exit value 1
This error only happens when I set WORHP's parameter and license environment variables (which I need to do, because I use WORHP and AMPL outside of this installation directory).
user@laptop:/opt/AMPL$ echo $WORHP_PARAM_FILE
:/opt/AMPL/worhp.xml
user@laptop:/opt/AMPL$ echo $WORHP_LICENSE_FILE
:/opt/AMPL/worhp.lic
On the other hand, everything works (only inside the /opt/AMPL directory) if I remove the declaration of WORHP_PARAM_FILE and WORHP_LICENSE_FILE from my .bashrc.
I couldn't figure out how to do this just by reading WORHP's User's Guide, so I would like to kindly ask for a little help with this issue.

I managed to fix the problem by adding the following to my .bashrc file:
WORHP_PARAM_FILE=/opt/AMPL/worhp.xml
export WORHP_PARAM_FILE
WORHP_LICENSE_FILE=/opt/AMPL/worhp.lic
export WORHP_LICENSE_FILE
Instead of the previous:
export WORHP_PARAM_FILE=$WORHP_PARAM_FILE:/opt/AMPL/worhp.xml
export WORHP_LICENSE_FILE=$WORHP_LICENSE_FILE:/opt/AMPL/worhp.lic
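The echo output above shows what went wrong: since the variables were unset before the export, the PATH-style expansion left a leading colon in the value, and WORHP apparently treats the whole string as a single file path rather than a colon-separated list. A minimal sketch of the difference:

```shell
# With the variable unset, PATH-style appending leaves a leading colon:
unset WORHP_PARAM_FILE
export WORHP_PARAM_FILE=$WORHP_PARAM_FILE:/opt/AMPL/worhp.xml
echo "$WORHP_PARAM_FILE"   # -> :/opt/AMPL/worhp.xml  (not a valid file path)

# WORHP expects a plain path, so assign it directly:
export WORHP_PARAM_FILE=/opt/AMPL/worhp.xml
echo "$WORHP_PARAM_FILE"   # -> /opt/AMPL/worhp.xml
```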

Related

Wildfly leave Two Orphaned File Descriptors after uploading a file

I'm running WildFly versions 14 and 18 (on different machines) with PrimeFaces. Whenever I upload a file, I get two orphaned file descriptors. I've double-checked my code and all resources are closed. I didn't have any problem running WildFly 11, by the way. I also used lsof to make sure that the open files belong to WildFly, and they do. Eventually, I get the "Too many open files" error.
ls -alFtr /proc/30724/fd|grep elete
lr-x------ 1 ora ora 64 Apr 3 09:36 594 -> /PATH_TO/undertow1607766259253292434upload (deleted)
lr-x------ 1 ora ora 64 Apr 3 09:40 591 -> /PATH_TO/undertow1607766259253292434upload (deleted)
Googling the problem gave me several RedHat links, but I can't find any solution to my problem. Any ideas?
Yes, if you are using PrimeFaces 7.x this was a bug, and it was fixed in PrimeFaces 8.0.
See: https://github.com/primefaces/primefaces/issues/5408

Directory Permissions CentOS

The scenario I'm trying to achieve is the following:
a directory created by root, with access given to a group of people.
So I created a group called 'testgroup' and a new user 'testuser' that belongs to 'testgroup'.
I gave rwx permissions to the owner, rw to the group, and none to the others.
But testuser cannot enter this directory. What am I doing wrong?
-bash-4.1$ ls -la
total 12
drwxr-xr-x 3 root root 4096 Sep 12 21:02 .
drwx--x--x 46 root root 4096 Sep 12 21:02 ..
drwxrw---- 6 root testgroup 4096 Sep 12 21:02 test
-bash-4.1$ cd test
-bash: cd: test: Permission denied
-bash-4.1$ id
uid=32010(testuser) gid=32015(testgroup) groups=32015(testgroup)
-bash-4.1$
I also tried logging out and back in, but I still couldn't access the directory.
If I give the group execute permission, testuser can change into the directory. But I don't want to allow him to execute scripts, just read/write. Is that possible?
You need to set the execute bit on your 'test' directory for the 'testgroup' group:
chmod 770 test (rwx for owner, rwx for group, no access for others)
A directory must have the execute bit set in order to access the files within it. Note that the execute bit on a directory has nothing to do with running scripts; it only controls whether the directory can be entered.
When applying permissions to directories on Linux, the permission bits have different meanings than on regular files:
- The write bit allows the affected user to create, rename, or delete files within the directory, and modify the directory's attributes.
- The read bit allows the affected user to list the files within the directory.
- The execute bit allows the affected user to enter the directory and access the files and directories inside it.
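To see this concretely, here is a scratch-directory sketch (run as a regular user, since root bypasses permission checks; 'test' below is just a stand-in for the directory in the question):

```shell
mkdir -p test

chmod 770 test               # rwx for owner and group, nothing for others
stat -c '%a %n' test         # -> 770 test

# With the execute bit off, even the owner cannot enter the directory
# (for non-root users):
# chmod 660 test && cd test  # -> bash: cd: test: Permission denied
```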

How to edit read only file, as root without sudo

I have a Thecus home server, and I'd like to edit the index.php file located at /img/www/htdocs/index.php, but every time I open it in vi it tells me the file is 'Read-only'.
I checked its file permissions using ls -l index.php:
-rw-r--r-- 1 root root 7619 Mar 29 2013 /img/www/htdocs/index.php
From my understanding, the first rw- in the permissions stands for the owner's permissions, and the owner is root, in the group root.
I have ssh'd into my server using:
ssh root@server.com
Once I login, it say's
root@127.0.0.1:~#
I have tried changing its ownership, chmodding it, and using vi to change the permissions; trying to force the write doesn't work either. How can I edit this damned file! :(
When I try to use sudo, it says the command is not found, so I'm assuming Thecus has stripped down the available commands.
Looking at the output of mount without any arguments, I noticed that the directory I'm currently working in is actually mounted ro. Is there a way I can change this?
/dev/cloop2 on /img/www type ext2 (ro,relatime,errors=continue,user_xattr,acl)
Any help would be great! :)
Try mount -o remount,rw /img/www/. If that is not possible, you can copy the contents to a place where you can modify them, unmount the original /img/www/, and then symlink or bind-mount the new location there.
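A sketch of both options; note that /raid/data is a hypothetical writable location on the NAS (pick any path that exists on your device), and all of these commands need root:

```shell
# Option 1: remount the filesystem read-write in place.
mount -o remount,rw /img/www

# Option 2: if the image cannot be remounted rw (e.g. a compressed cloop
# image), copy the contents to a writable location and bind-mount that copy
# over the original path.
cp -a /img/www /raid/data/www
umount /img/www
mount --bind /raid/data/www /img/www
```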

Rails app folder is large due to GIT pack file

So I noticed that my Heroku slug size was huge (about 100 MB) and decided to inspect my Rails project structure to see what was causing it.
When I inspected the project folder, it said it was approximately 100 MB in size. However, after inspecting all the (first-level) child files and folders, I couldn't see any culprit (my logs are deleted on a daily basis).
I tried deleting the entire contents (the code is up on GitHub, of course!), but when I inspect the folder it still says it's 99.9 MB on disk - weird.
I sudo'd into the folder in terminal and ran ls -al to see if there were any hidden files or folders but only got the following:
-rw-r--r-- 1 <name> staff 15364 15 Apr 16:03 .DS_Store
drwxr-xr-x 3 <name> staff 102 17 Apr 2012 .bundle
drwxr-xr-x 18 <name> staff 612 15 Apr 15:18 .git
-rw-r--r-- 1 <name> staff 487 4 Apr 10:21 .gitignore
Nothing unusual there I think.
Has anyone seen a similar situation and can advise a solution?
UPDATE
So it looks as though the .pack file(s) in the .git folder are the issue here. I did a little searching and ran
git filter-branch --index-filter 'git rm -r --cached --ignore-unmatch unwanted_folder' --prune-empty
and
git gc --aggressive --prune
which removed all but one of the pack files. What's confusing me is why the remaining pack file is 75 MB when all code and assets come to 25 MB. I'm guessing it's keeping a history of the changes made locally since the repo was initialised, or something? If so, how do I clear these down? I store all changes on GitHub anyway, so the old files/versions are redundant (unless I'm missing something?).
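One likely explanation: git filter-branch keeps a backup of the pre-rewrite history under refs/original/, and the reflog also still points at the old objects, so git gc is not allowed to prune them and the old pack survives. A sketch of the usual cleanup (destructive - make sure the rewritten history is what you want, and that GitHub has everything you need, before running it):

```shell
# Drop the refs/original backup refs left behind by filter-branch.
git for-each-ref --format='delete %(refname)' refs/original | git update-ref --stdin

# Expire the reflog so it no longer anchors the pre-rewrite objects.
git reflog expire --expire=now --all

# Now gc can actually discard the unreachable objects and repack.
git gc --prune=now --aggressive

# Check the repacked size.
git count-objects -vH
```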

What does "the trustAnchors parameter must be non-empty" mean?

I'm trying to use JetS3 to access Amazon S3 in an app which also uses Jersey with Grizzly (unsure if that is relevant). My dev environment is Eclipse on OS X 10.7.3, using JRE version 1.7.0u.jdk.
I've read that it relates to not being able to find a "keystore", whatever that is - but it shouldn't need to use any local keys, I'm already providing it with the authentication information for S3 programmatically.
I don't know if this is an issue with my code, or with my dev environment, can anyone help?
edit: I added the following on the command line:
-Djavax.net.ssl.keyStore=/Library/Java/JavaVirtualMachines/1.7.0u.jdk/Contents/Home/jre/lib/security/cacerts
This file exists, but I'm still seeing the same error :-(
The intersection of Java's file tree and Apple's packaging system strikes again!
I just solved something similar to this (I think it was the legacy of a botched beta upgrade). Same error, at least. The situation I found on my disk was that my JDK installation contained symbolic links instead of actual files (including cacerts):
> ls -lt /Library/Java/JavaVirtualMachines/1.6.0_30-b12-404.jdk/Contents/Home/lib/security/
total 24
lrwxr-xr-x 1 root admin 79 Apr 7 15:11 blacklist -> /System/Library/Java/Support/Deploy.bundle/Contents/Home/lib/security/blacklist
lrwxr-xr-x 1 root admin 81 Apr 7 15:11 cacerts -> /System/Library/Java/Support/CoreDeploy.bundle/Contents/Home/lib/security/cacerts
lrwxr-xr-x 1 root admin 87 Apr 7 15:11 trusted.libraries -> /System/Library/Java/Support/Deploy.bundle/Contents/Home/lib/security/trusted.libraries
Unfortunately the linked Deploy.bundles did not exist.
In my case, I was able to look back in Time Machine, and find the deleted bundles and restore them.
You may have some older versions already in place that you could link to. At the least you should be able to look and see if you've got a similar underlying issue.
Sorry it's not a complete solution, but I hope it gets you a little farther down the road.
You could always just get the distribution from Oracle and pop the cert files in place, though if your installation is missing other items, there might be other problems.
On Google I found this blog:
http://architecturalatrocities.com/post/19073788679/fixing-the-trustanchors-problem-when-running-openjdk-7
The problem there is that OpenJDK does not include the files, and the author recommends linking to the bundle file that I had to restore in my case.
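Two quick checks that may help diagnose this class of problem. First, the "trustAnchors" error is about the trust store, so the relevant system property is -Djavax.net.ssl.trustStore, not keyStore. Second, you can verify that the cacerts path actually resolves and contains certificates (the default password for the bundled cacerts store is "changeit"; the path below is the one from the question - adjust for your JDK):

```shell
CACERTS=/Library/Java/JavaVirtualMachines/1.7.0u.jdk/Contents/Home/jre/lib/security/cacerts

# -L dereferences symlinks, so this fails if the link target is missing:
ls -lL "$CACERTS"

# List the first few entries to confirm the store is readable and non-empty:
keytool -list -keystore "$CACERTS" -storepass changeit | head -n 3
```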