Composer Akeneo installation: "Could not open input file: composer.phar" - apache

I am trying to install a PIM named Akeneo. The system requirements are all checked:
https://docs.akeneo.com/2.3/install_pim/manual/system_requirements/system_requirements.html
I am on Apache2 / Ubuntu 16.04
My /var/www/ directory belongs entirely to www-data:www-data (I ran chown -R www-data:www-data during the install).
Error message from the server: “Could not open input file: composer.phar”
I googled it, got many results, and looked through the forums on Stack Overflow, finding answers from 2010 to 2018. I think the problem could be linked to the "composer.phar" file, the way symlinks work on Apache2, and whether Composer is installed globally or "inside individual project(s)".
Following two tutorials, I had to install Composer to continue the install of the PIM.
My server says:
Composer (version 1.10.10) successfully installed to:
/usr/local/bin/composer
If I cd ~ and run ls, I see "composer.phar" and "composer-setup.php" there.
I am confused because the Akeneo tutorial says:
After extracting the file, change into the Akeneo directory and run
the commands below:
cd /var/www/html/akeneo/pim-community-standard
sudo php -d memory_limit=3G ../composer.phar install --optimize-autoloader --prefer-dist
sudo php bin/console cache:clear --no-warmup --env=prod
...
I don't understand the ../composer.phar install part. There is no composer.phar file in any of these directories: not in /var/www/html/akeneo/ and not in /var/www/html/akeneo/pim-community-standard.
Was it supposed to generate a composer.phar file there? Should it find composer.phar one level above (../composer.phar)? I double-checked the Akeneo PIM files, and the original .rar archive has no composer.phar file. Or does it have something to do with a symlink that accesses the global "composer.phar" from the global install? Regarding symlinks, there are some in the "vendor" folder of the PIM, and running ls -l -a gives:
lrwxrwxrwx 1 www-data www-data 28 Feb 5 2020 doctrine -> ../doctrine/orm/bin/doctrine
lrwxrwxrwx 1 www-data www-data 34 Feb 5 2020 doctrine-dbal -> ../doctrine/dbal/bin/doctrine-dbal
lrwxrwxrwx 1 www-data www-data 46 Feb 5 2020 doctrine-migrations -> ../doctrine/migrations/bin/doctrine-migrations
lrwxrwxrwx 1 www-data www-data 56 Feb 5 2020 requirements-checker -> ../symfony/requirements-checker/bin/requirements-checker
lrwxrwxrwx 1 www-data www-data 51 Feb 5 2020 var-dump-server -> ../symfony/var-dumper/Resources/bin/var-dump-server
There are composer.json and composer.lock files inside /var/www/html/akeneo/pim-community-standard. I'm so confused, because the tutorial says to be inside the pim-community-standard directory and run this: php -d memory_limit=3G ../composer.phar install --optimize-autoloader --prefer-dist
I hope I can resume the install without breaking anything. Some posts give the solution of updating Composer, or installing composer.phar, inside the project. I am honestly totally lost.
(I don't know if I should bring this up, but is Docker needed? It isn't mentioned in the tutorial, but I see a folder with "docker" in its name, and I know Composer and Docker can work together sometimes.)

I moved the composer.phar file into the app directory, and the install advanced. It's not a technical issue, a misconfiguration, or anything like that.
It's a lack of information about Composer on the Akeneo site, though it's probably assumed that users already have knowledge of Composer.
Solution for me: moving the "composer.phar" file (from my home directory, in my case) to /var/www/project/ (i.e., the directory one level above pim-community-standard).
So in my case it was a global vs. local install issue, which is still unclear to me, and I will have to look deeper into how to properly install Composer. I'm not sure my current setup won't cause issues later, since I effectively installed Composer globally at first and then moved only one file.
Also, the composer command is not returning anything, even though I do have the composer file in /usr/bin/. This is very strange.
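For reference, a globally installed Composer can be invoked directly instead of moving composer.phar around. A minimal sketch, assuming the installer really did place the binary at /usr/local/bin/composer as reported above:
cd /var/www/html/akeneo/pim-community-standard
# Point php at the global binary instead of the relative ../composer.phar:
sudo php -d memory_limit=3G /usr/local/bin/composer install --optimize-autoloader --prefer-dist
# If the bare "composer" command prints nothing, check what the shell resolves it to:
which composer
composer --version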

Related

Homebrew Apache directories on macOS Big Sur

I went through every tutorial I could find on installing and configuring Apache on Big Sur. Invariably, when discussing the config files, the tutorial points to /usr/local/etc/httpd/httpd.conf as the file that needs to be massaged a bit. The only trouble is that the file doesn't exist when I check that directory. Any idea? Thanks for your help in advance.
EDIT: I did an uninstall and reinstall of httpd as requested, and I still don't get anything under /usr/local/. There must be something different in the Homebrew configuration.
➜ ~ brew uninstall httpd
Uninstalling /opt/homebrew/Cellar/httpd/2.4.49... (1,660 files, 31.9MB)
➜ ~ brew install httpd
==> Downloading https://ghcr.io/v2/homebrew/core/httpd/manifests/2.4.49
Already downloaded: /Users/johnny/Library/Caches/Homebrew/downloads/6c60d66c3915be5c993e144a743960b9e6be26e557efeeb6c61f530c79ffed34--httpd-2.4.49.bottle_manifest.json
==> Downloading https://ghcr.io/v2/homebrew/core/httpd/blobs/sha256:e6ebcb4a1307
Already downloaded: /Users/johnny/Library/Caches/Homebrew/downloads/8506f199d5d7def536481d6fa87aa94c25201b57072d032e97edb8ce78fa86a3--httpd--2.4.49.arm64_big_sur.bottle.tar.gz
==> Pouring httpd--2.4.49.arm64_big_sur.bottle.tar.gz
==> Caveats
DocumentRoot is /opt/homebrew/var/www.
The default ports have been set in /opt/homebrew/etc/httpd/httpd.conf to 8080 and in
/opt/homebrew/etc/httpd/extra/httpd-ssl.conf to 8443 so that httpd can run without sudo.
To restart httpd after an upgrade:
brew services restart httpd
Or, if you don't want/need a background service you can just run:
/opt/homebrew/opt/httpd/bin/httpd -D FOREGROUND
==> Summary
🍺 /opt/homebrew/Cellar/httpd/2.4.49: 1,660 files, 31.9MB
➜ ~
[Edited and updated]
I am using macOS Catalina 10.15.7 / xcode-select version 2373, and by running $ brew install httpd I can install the Apache service under /usr/local/.
Can you please remove it and install it again using the same command, sharing all the output?
These are the important things:
DocumentRoot is /usr/local/var/www.
The default ports have been set in /usr/local/etc/httpd/httpd.conf to 8080 and in
/usr/local/etc/httpd/extra/httpd-ssl.conf to 8443 so that httpd can run without sudo.
To start httpd:
brew services start httpd
Or, if you don't want/need a background service you can just run:
/usr/local/opt/httpd/bin/httpd -D FOREGROUND
This is the listing of the resulting config directory:
$ ls -lrt /usr/local/etc/httpd
total 200
drwxr-xr-x 14 user admin 448 Sep 22 23:35 extra
-rw-r--r-- 1 user admin 21222 Sep 22 23:35 httpd.conf
-rw-r--r-- 1 user admin 13064 Sep 22 23:35 magic
-rw-r--r-- 1 user admin 60847 Sep 22 23:35 mime.types
drwxr-xr-x 4 user admin 128 Sep 22 23:35 original
And this is my test showing it is working.
$ curl localhost:8080
<html><body><h1>It works!</h1></body></html>
$ tail -f /usr/local//var/log/httpd/access_log
::1 - - [22/Sep/2021:23:39:35 -0500] "GET / HTTP/1.1" 200 45
Based on your output and the Homebrew documentation, I believe you are using Apple Silicon. Is that correct? Can you confirm?
https://docs.brew.sh/Installation
This script installs Homebrew to its preferred prefix (/usr/local for macOS Intel, /opt/homebrew for Apple Silicon, and /home/linuxbrew/.linuxbrew for Linux) so that you don’t need sudo when you brew install. It is a careful script; it can be run even if you have stuff installed in the preferred prefix already. It tells you exactly what it will do before it does it too. You have to confirm everything it will do before it starts.
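Given those differing prefixes, a prefix-agnostic way to locate the config file is to ask Homebrew itself. A small sketch that should work on both Intel and Apple Silicon (assuming the brew-installed httpd is on your PATH for the last check):
# Print the active Homebrew prefix (/usr/local or /opt/homebrew) and look under it:
brew --prefix
ls "$(brew --prefix)/etc/httpd/httpd.conf"
# Apache can also report which config file it was compiled to use:
httpd -V | grep SERVER_CONFIG_FILE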

Q: "E: Error reading the CPU table" in Ubuntu

I cannot update or upgrade my computer from the terminal.
It shows the error message "E: Error reading the CPU table".
How can I fix it?
In my case, some filesystem corruption caused files in the directory /usr/share/dpkg/ to be deleted (concurrent access during some other operation?), and the file /usr/share/dpkg/cputable wasn't found.
Rsyncing/copying from a known good installation appears to have fixed this particular problem.
A possible alternative: boot from a live CD, mount the target disk, and copy the files over.
The command strace apt update showed the location of the missing file.
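For anyone wanting to reproduce that diagnosis, a sketch of such an strace invocation (assuming strace is installed; the exact syscall filter may vary by strace version):
# Trace file opens during apt update and watch for the failing cputable lookup:
sudo strace -f -e trace=open,openat apt update 2>&1 | grep cputable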
In my case, I was faced with a corrupted dpkg installation, and it ended up producing “E: Error reading the CPU table” even when I entered sudo apt-get update.
Steps to fix it:
1) Create a temporary location for dpkg:
sudo mkdir /tmp/dpkg
cd /tmp/dpkg
2) Download the relevant .deb file from http://archive.ubuntu.com/ubuntu/pool/main/d/dpkg/:
sudo wget http://archive.ubuntu.com/ubuntu/pool/main/d/dpkg/dpkg_1.19.0.5ubuntu5.1_amd64.deb
3) Extract the downloaded .deb file manually (installing it with GDebi might not work either); see the sketch after these steps for one way to do this. Navigate to <Your_Extracted_Folder>/usr/share/dpkg and you will see the set of files, including cputable. We need to copy these files to /usr/share/dpkg.
4) Check whether /usr/share/ has a dpkg folder; if not, create it with:
sudo mkdir /usr/share/dpkg
5) cd <Your_Extracted_Folder>/usr/share/dpkg
6) sudo cp * /usr/share/dpkg
Now all the files are in place, and this fixes the “E: Error reading the CPU table” error.
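As promised in step 3, here is a sketch of one way to extract the .deb without relying on dpkg itself (which may be broken in this scenario); the filename matches the wget example above:
cd /tmp/dpkg
# A .deb is an ar archive containing debian-binary, control.tar.*, and data.tar.*:
ar x dpkg_1.19.0.5ubuntu5.1_amd64.deb
mkdir extracted
tar -xf data.tar.* -C extracted
# The needed files, including cputable, are now under extracted/usr/share/dpkg:
sudo cp extracted/usr/share/dpkg/* /usr/share/dpkg/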
Check whether you have (unintentionally) "overwritten" your /usr/share directory with a file:
ls -l /usr/
-rwxr-xr-x 1 root root 1254 Feb 4 19:08 share*
drwxr-xr-x 193 root root 4096 Dec 10 09:09 share~/
I solved it by moving the file share* to share_bak and then renaming the directory share~ to share. Problem solved!
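In shell terms, the fix described above is roughly the following (a sketch, assuming the stray file and backup directory are named exactly as in the listing):
# Move the bogus *file* out of the way, then restore the real directory:
sudo mv /usr/share /usr/share_bak
sudo mv /usr/share~ /usr/share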

How do I properly configure GlassFish 4 to work with Gurobi's shared library?

Error:
java.lang.UnsatisfiedLinkError: /opt/gurobi600/linux64/lib/libGurobiJni60.so: libgurobi60.so: cannot open shared object
It gets the path right when I add it via the JVM settings; for some reason it doesn't find it when relying only on the LD_LIBRARY_PATH environment variable, though. Either way, it has trouble with libgurobi60.so. I tried adding all of this to glassfish_home/domains/domain1/lib/applibs and to ext, to no avail.
Here are the permissions for /opt/gurobi600/linux64/lib
-rw-r--r-- gurobi.jar
lrwxrwxrwx libgurobi60.so -> ./libgurobi.so.6.0.0
lrwxrwxrwx libgurobi_c++.a -> ./libgurobi_g++4.2.a
-rw-r--r-- libgurobi_g++4.1.a
-rw-r--r-- libgurobi_g++4.2.a
-rwxr-xr-x libGurobiJni60.so
-rwxrwxrwx libgurobi.so.6.0.0
I had this working on my previous server running Ubuntu 12.04; this is now on 14.04. Previously, copying the .so file to /usr/local/bin seemed to fix the issue, but this does not work on the new server.
Running the following two commands fixed it:
echo "/opt/gurobi600/linux64/lib" | sudo tee /etc/ld.so.conf.d/gurobi.conf
sudo ldconfig
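To confirm the loader cache actually picked the library up, a quick check (not part of the original answer):
# The Gurobi libraries should now appear in the dynamic linker cache:
ldconfig -p | grep gurobi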

make: *** /lib/modules/2.6.32-279.el6.x86_64/build: No such file or directory. Stop

I downloaded the Ralink driver from their web site and extracted it:
tar -xvf rtl*
Then I ran make in it. A Google search suggested that "kernel-devel" needed to be installed.
I installed the kernel-devel package, but I still get this error:
make: *** /lib/modules/2.6.32-279.el6.x86_64/build: No such file or directory. Stop.
When I check to see whether that file exists, I cd into /lib/modules/2.6.32-279.el6.x86_64/.
I believe this error happens right after make tries to execute this command:
make ARCH=x86_64 CROSS_COMPILE= -C /lib/modules/2.6.32-279.el6.x86_64/build M=/home/a/Desktop/3/rtl8712_8188_8191_8192SU_usb_linux_v2.6.6.0.20120405 modules
and it is there; it is called "build".
So why is it saying no such file or directory?
EDIT:
If your problem is like the one I was having (see below), it seems the kernel development package isn't installed.
Try:
yum install kernel-devel
Original Message
I am having the same problem. But, interestingly, when I run ls -l on the parent of the "missing directory" (so, ls -l /lib/modules/2.6.32-431.el6.x86_64/), it shows that build is a broken link pointing to /usr/src/kernels/2.6.32-431.el6.x86_64, but /usr/src/kernels/ is empty.
So, I don't know if this is much help, but hopefully it gives someone else a better idea of what's wrong.
[root@xx libreswan-3.7]# ls -l /lib/modules/2.6.32-431.el6.x86_64/
total 3524
lrwxrwxrwx. 1 root root 46 Dec 12 13:42 build -> ../../../usr/src/kernels/2.6.32-431.el6.x86_64
drwxr-xr-x. 2 root root 4096 Nov 21 22:41 extra
drwxr-xr-x. 11 root root 4096 Dec 12 13:42 kernel
-rw-r--r--. 1 root root 589679 Dec 12 13:43 modules.alias
...
-rw-r--r--. 1 root root 851070 Dec 12 13:43 modules.usbmap
lrwxrwxrwx. 1 root root 5 Dec 12 13:42 source -> build
drwxr-xr-x. 2 root root 4096 Nov 21 22:41 updates
drwxr-xr-x. 2 root root 4096 Dec 12 13:42 vdso
drwxr-xr-x. 2 root root 4096 Nov 21 22:41 weak-updates
[root@xx libreswan-3.7]# ls /usr/src/kernels/
[root@xx libreswan-3.7]#
Notice that the "source" link is also broken because it points to build.
cd /lib/modules/2.6.32-431.el6.x86_64
sudo rm build
sudo ln -s ../../../usr/src/kernels/2.6.32-431.29.2.el6.x86_64/ build
The above commands fixed the issue for me.
Basically, you should be able to use any 2.6.32* kernel source directory in the last command.
Thanks to Nighthawk663.
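A generalized form of the commands above (a sketch; <installed-version> is a placeholder for whatever kernel-devel tree you actually have):
# See which kernel source trees are installed, then point build at one of them:
ls /usr/src/kernels/
cd /lib/modules/$(uname -r)
sudo rm build
sudo ln -s /usr/src/kernels/<installed-version> build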
I had the same problem with ./configure --with-linux=/lib/modules/$(uname -r)/build/. It says "not a file..." too.
Reason:
The kernel header files are missing for the current kernel.
How I solved it:
Find the current kernel: uname -r
yum install kernel-devel-$(uname -r)
You may not find it in the repositories; in that case, just google that version of kernel-devel, download the rpm file, and run:
rpm -i kernel-devel-xxxx.rpm
Then it worked for me!
/usr/lib/modules/<your-kernel-version>/build is a symlink. The link file may exist while its target does not, so it is normal to see the link and yet be unable to cd into it.
A similar example on Fedora 29:
lrwxrwxrwx. 1 root root 40 Oct 21 07:38 /usr/lib/modules/4.18.16-300.fc29.x86_64/build -> /usr/src/kernels/4.18.16-300.fc29.x86_64
Just install kernel-devel.
Example:
sudo dnf install kernel-devel-$(uname -r)
This happens because the link does not match your kernel version.
Delete the wrong link:
$ rm build
Use $ uname -r to check the kernel version.
Build a new link for your kernel version:
$ ln -s ../../../usr/src/kernels/$(uname -r)/ build
Done.
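Whichever of these routes you take, it is easy to verify that the link now resolves (a quick check; readlink -e prints the resolved path only if the entire chain exists):
readlink -e /lib/modules/$(uname -r)/build
# A broken link prints nothing and exits nonzero; make should work once this resolves.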

How can I mount an S3 volume with proper permissions using FUSE

I have an Amazon S3 bucket (let's call it static.example.com) that I need to mount on an EC2 instance (Ubuntu 12.04.2). I've installed s3fs. I'm able to mount the volume, but I can't write to the bucket. I have tried:
sudo s3fs static.example.com -o use_cache=/tmp,allow_other,uid=33,gid=33 /mnt/static.example.com
I can then cd /mnt and ls -la to see:
drwxr-xr-x 5 root root 4096 Mar 28 18:03 .
drwxr-xr-x 25 root root 4096 Feb 19 19:22 ..
lrwxrwxrwx 1 root root 7 Feb 21 19:19 httpd -> /httpd/
drwx------ 2 root root 16384 Oct 9 2012 lost+found
drwxr-xr-x 1 www-data www-data 0 Jan 1 1970 static.example.com
This all looks good, but when I cd static.example.com and mkdir test, I get:
mkdir: cannot create directory `test': Permission denied
The only way I can actually create a directory or touch a file is to force it with sudo. This is not a viable option, however, because I want to write files to the bucket from Apache. My Apache server runs as user:group www-data. Running mount yields:
s3fs on /mnt/static.example.com type fuse.s3fs (rw,nosuid,nodev,allow_other)
How can I mount this bucket in a manner that will allow me to write to the bucket?
I'm the lead developer and maintainer of the open-source project RioFS: a userspace filesystem to mount Amazon S3 buckets.
Our project is an alternative to s3fs; its main advantages compared to s3fs are simplicity, speed of operations, and bug-free code. The project is currently in beta, but it has been running on several high-load file servers for quite some time.
We are looking for more people to join the project and help with testing. In return, we offer quick bug fixes and will listen to your feature requests.
Regarding your issue:
if you use RioFS, you can mount a bucket and have write access to it using the following command (assuming you have installed RioFS and exported the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables):
riofs -o allow_other http://s3.amazonaws.com bucket_name /mnt/static.example.com
(please refer to the project description for command-line arguments)
Please note that the project is still in development; there could still be a number of bugs.
If you find that something doesn't work as expected, please file an issue report on the project's GitHub page.
Hope this helps, and we look forward to seeing you join our community!
This works for me:
sudo s3fs bucketname /mnt/folder -o allow_other,nosuid,use_cache=/mnt/foldercache
If you need to debug, just add ,f2 -f -d:
sudo s3fs bucketname /mnt/folder -o allow_other,nosuid,use_cache=/mnt/foldercache,f2 -f -d
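For the original question's Apache use case, it may also help to hand ownership to www-data at mount time. A hedged variant of the command above (uid/gid 33 is www-data on Debian/Ubuntu, as in the question's own attempt; umask=0022 yields 755/644-style permissions so the owning user can write):
sudo s3fs static.example.com /mnt/static.example.com -o allow_other,uid=33,gid=33,umask=0022,use_cache=/tmp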
Try this method using S3Backer:
mountpoint/
file # (e.g., can be used as a virtual loopback)
stats # human readable statistics
Read more about it here:
http://www.turnkeylinux.org/blog/exploring-s3-based-filesystems-s3fs-and-s3backer