composer update "process killed"

I tried to execute:
composer.phar update
And received:
Fatal error: Allowed memory size of 94371840 bytes exhausted (tried to allocate 71 bytes) in phar:///home/xxxxxxx/bin/composer.phar/src/Composer/DependencyResolver/RuleSetGenerator.php on line 123
The xxxxxxx is the user.
And then I tried to execute:
php -d memory_limit=256M ~/bin/composer.phar update
And:
php -d memory_limit=512M ~/bin/composer.phar update
Then I received this:
Yikes! One of your processes (php, pid 14331) was just killed for excessive resource usage.
Please contact DreamHost Support for details.
How can I run composer update on DreamHost shared hosting? Can someone who has experienced this situation help me, please?
The context: Laravel 4

Run the composer update command on your development machine, which generates the composer.lock file for you. Upload that composer.lock file, and on the shared host just run composer install. This uses far less memory, because the expensive dependency resolution has already happened locally.
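A minimal sketch of that workflow, assuming composer is installed locally and composer.phar lives in ~/bin on the host as in the question:

# On your development machine, where memory is not constrained:
composer update                      # resolves dependencies, writes composer.lock

# On the shared host, after uploading composer.lock next to composer.json:
php ~/bin/composer.phar install      # installs the locked versions; skips the memory-hungry solver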

This happens because your server is short on memory.
Install the packages on your local machine,
then replace the server's composer.lock file with your local composer.lock file
(or push the composer.lock file from local and pull it on the server).
Then go to the terminal on the server and run composer install (not composer update, which would re-run the memory-hungry dependency resolution).

Related

Ubuntu Server Backup and Restore via tar

I'm trying to learn how to back up and restore my Ubuntu Server via tar so I know that I have a safe system. After I untar and reboot, I have several issues, but they seem to be caused by a read-only file system. The source and destination servers are both Ubuntu Server on the same version, 18.04.5 LTS. The source server is a VPS with 6 GB RAM and 4 vCPUs. The destination server is a VM on my FreeNAS machine with 6 GB RAM and 2 vCPUs.
The primary applications that need to work are my Graylog server and Nagios server. I've mostly followed the instructions at Ubuntu.
First, my tar command is:
sudo tar -c --use-compress-program=pigz -f backup.tar.gz --exclude=/backup.tar.gz --exclude=/dev --exclude=/usr --exclude=/sbin --exclude=/proc --exclude=/sys --exclude=/tmp --exclude=/run --exclude=/mnt --exclude=/media --exclude=/lost+found --exclude=/home/*/.cache --exclude=/home/*/.gvfs --exclude=/home/*/.local/share/Trash --exclude=/var/log --exclude=/var/cache/apt/archives --exclude=/usr/src/linux-headers* --one-file-system /
I use pigz to take advantage of the VPS's 4 vCPUs so the backup takes less time. I transfer this to my VM, which has a fresh copy of Ubuntu Server 18.04.5, and untar with:
sudo tar -xvpzf backup.tar.gz -C / --numeric-owner
After I reboot, I get the following as soon as I boot:
Unable to setup logging. [Errno 30] Read-only file system: '/var/log/landscape/sysinfo.log'
run-parts: /etc/update-motd.d/50-landscape-sysinfo exited with return code 1
mktemp: failed to create file via template '/var/lib/update-notifier/tmp.XXXXXXXXXX': Read-only file system
run-parts: /etc/update-motd.d/95-hwe-eol exited with return code 1
/usr/lib/update-notifier/update-motd-fsck-at-reboot: 33: /usr/lib/update-notifier/update-motd-fsck-at-reboot: cannot create /var/lib/update-notifier/fsck-at-reboot: Read-only file system
Some areas of the system do work like the original source: my SSH port and hostname change, for example. But I get the errors above, and my Graylog and Nagios servers do not work.
So I'm wondering where I went wrong in my process; any help would be appreciated. The source is a live server with backups, so I'm safe there. I'm just making sure I have my ducks in a row for the future.

Using "Remote SSH" in VSCode on a target machine that only allows inbound SSH connections

Is there a way to use the VSCode Remote SSH extension to interact with a remote host that does not allow outbound internet connections?
Is it possible to download the vscode-server files from another system and copy to host?
I read this, but I can't connect the server to the internet.
When you connect to a host, the extension executes a bash script that downloads a tarball with wget or curl and extracts it into a directory under your home directory. Here's an offline workaround.
Attempt to connect, let it fail
On the server, get the commit ID
$ ls ~/.vscode-server/bin
553cfb2c2205db5f15f3ee8395bbd5cf066d357d
Download the tarball, replacing $COMMIT_ID with the commit ID from the previous step
For Stable Version
https://update.code.visualstudio.com/commit:$COMMIT_ID/server-linux-x64/stable
For Insider Version
https://update.code.visualstudio.com/commit:$COMMIT_ID/server-linux-x64/insider
Move tarball to ~/.vscode-server/bin/$COMMIT_ID/vscode-server-linux-x64.tar.gz
Extract tarball in this directory
$ cd ~/.vscode-server/bin/$COMMIT_ID
$ tar -xvzf vscode-server-linux-x64.tar.gz --strip-components 1
Connect again
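Putting the steps together, a rough end-to-end sketch (assuming a single commit directory was left behind by the failed attempt, and that you can copy the downloaded tarball onto the server):
# On the server: read the commit ID left by the failed connect
COMMIT_ID=$(ls ~/.vscode-server/bin)

# On a machine with internet access: fetch the matching stable server build
curl -L -o vscode-server-linux-x64.tar.gz \
  "https://update.code.visualstudio.com/commit:$COMMIT_ID/server-linux-x64/stable"

# Back on the server: place and unpack the tarball, then reconnect from VSCode
mv vscode-server-linux-x64.tar.gz ~/.vscode-server/bin/$COMMIT_ID/
cd ~/.vscode-server/bin/$COMMIT_ID
tar -xvzf vscode-server-linux-x64.tar.gz --strip-components 1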
You'll still need to install any extensions manually. There's a download button next to all the extensions in the marketplace. Once you have the .vsix file you can install them through the GUI with the Install from VSIX option in the extensions manager.
This is kind of a pain and hopefully they improve this process, but if you have a network-based home directory, you only have to do this once.
To find the commit ID on the client, open VS Code -> About:
Version: 1.46.1
Commit: cd9ea6488829f560dc949a8b2fb789f3cdc05f5d
Date: 2020-06-17T21:17:14.222Z
Electron: 7.3.1
Chrome: 78.0.3904.130
Node.js: 12.8.1
V8: 7.8.279.23-electron.0
OS: Darwin x64 17.7.0
$COMMIT_ID = cd9ea6488829f560dc949a8b2fb789f3cdc05f5d
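If the code CLI is on your PATH, you can also read the commit hash without opening the About dialog; code --version prints the version, the commit, and the architecture:
$ code --version
1.46.1
cd9ea6488829f560dc949a8b2fb789f3cdc05f5d
x64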
A new feature is being added to support offline installs.
You can now solve this issue with a new user setting in the Remote - SSH extension. If you enable the setting remote.SSH.allowLocalServerDownload, the extension will install the VS Code Server on the client first and then copy it over to the server via SCP.
Note: This is currently an experimental feature, but it will be turned on by default in the next release.
https://code.visualstudio.com/blogs/2019/10/03/remote-ssh-tips-and-tricks
As a workaround, I have done the following:
Desktop ~/.ssh/config (with a single argument, RemoteForward sets up reverse dynamic SOCKS forwarding; this needs OpenSSH 7.6 or later):
...
Host *
    RemoteForward 54321
...
Remote: ~/bin/wget, where ~/bin is added to PATH via .bashrc:
#!/bin/bash
export LD_LIBRARY_PATH=$HOME/opt/lib/tsocks/
export TSOCKS_CONF_FILE=$HOME/opt/tsocks/tsocks.conf
$HOME/bin/tsocks /usr/bin/wget "$@"
Remote: ~/opt/tsocks/tsocks.conf
server = 127.0.0.1
server_port = 54321
server_type = 5
Note: the tsocks binary has been scp'd to ~/bin/tsocks, and ~/opt/lib/tsocks/ has been created with libtsocks.so, which is normally stored in /usr/lib64/libtsocks.so.
This is a workaround that gives me wget functionality without touching anything outside my profile (e.g. no root required, even though I have it).
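A quick way to check that the wrapper and the reverse tunnel work (assuming ~/bin precedes /usr/bin in PATH) is to fetch any reachable HTTPS URL from the remote shell:
$ which wget                 # should print ~/bin/wget, not /usr/bin/wget
$ wget -q -O /dev/null https://update.code.visualstudio.com && echo tunnel OK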
Current Version of VS Code: 1.48.2
I just kill the wget process on the server end and let the client download the archive and transfer it to the server. That's quite easy, as shown below.
Make sure that you set this in settings.json:
"remote.SSH.allowLocalServerDownload": true,
Then execute the shell commands below.
# to find the <pid>
ps aux | grep wget | grep vscode-server
# kill the process
kill -9 <pid>
# then wait for the client downloading and transferring
# optional: If you want to know the progress, just
cd ~/.vscode-server/bin/<commit-id>/
watch -n 1 -d ls -rthl
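If you prefer a single command over the ps/grep/kill sequence, something like this should also work, assuming pkill is available on the server:
pkill -9 -f 'wget.*vscode-server'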

Getting "chmod(): Operation not permitted" on "composer update"

When I run 'composer update' I get this error:
Writing lock file
Generating autoload files
[ErrorException]
chmod(): Operation not permitted
*It works just fine with sudo, but then I have to reset the owner & permissions, which is really annoying...
**I also tried resetting the owner of ~/.composer to www-data with permissions 777; no effect.
***I'm using Ubuntu 16.04 LTS + Apache 2.4.18 & PHP 7.0.26.
Any idea?
chmod will only work without sudo if the owner of the file is the same user that runs the composer update command.
The problem is that the error message doesn't tell you which file it's trying to chmod; that depends on the project.
Running the command in verbose mode will give you more details:
composer update -v
In my case, it gave me a stack trace, showing which file called chmod(), and the line number.
However, it didn't give me the path of the file passed to chmod().
I had to add a simple echo right before the call to chmod() (without forgetting to remove it afterwards).
Once you know which file/folder is responsible for the error message, change its owner with chown.
In my case (Magento 2.3), the culprit was the bin/magento file, which needs to be owned by the user running the composer commands.
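As a sketch of that fix for the Magento case above (bin/magento stands in for whatever file your stack trace points at):
ls -l bin/magento                  # see who currently owns the file
sudo chown "$USER" bin/magento     # hand it to the user who runs composer
composer update -v                 # should now get past the chmod() call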

Security plugin in Local yum repository

I've created a local yum repository for RHEL 7 on a separate server. Then I used the "reposync" command to get the packages from RHN.
reposync --download-metadata --gpgcheck -l --repoid=rhui-REGION-rhel-server-rh-common --download_path=/rhel_security_repo/
reposync --download-metadata --gpgcheck -l --repoid=rhui-REGION-rhel-server-optional --download_path=/rhel_security_repo/
reposync --download-metadata --gpgcheck -l --repoid=rhui-REGION-rhel-server-releases --download_path=/rhel_security_repo/
After that, I executed the following command to create my repo:
createrepo --database /rhel_security_repo/
The repository was created successfully, with more than 9,000 packages as expected. What I am trying to do now is have other boxes use this local repository. I've created a yum config file on the other boxes where the baseurl points to the server with the local yum repository.
[security-updates-rhel7]
name=Repository for RHEL7 security updates
baseurl=ip-server
enabled=1
gpgcheck=1
All the servers are able to talk to this server with the local yum repo and they can install packages from it.
The problem is I can't update packages when I run yum update --security:
Example:
yum --disablerepo="*" --enablerepo="security-updates-rhel7" --security update mariadb-libs
Loaded plugins: amazon-id, rhui-lb
--> 1:mariadb-libs-5.5.37-1.el7_0.x86_64 from @rhui-REGION-rhel-server-releases removed (updateinfo)
--> 1:mariadb-libs-5.5.40-2.el7_0.x86_64 from security-updates-rhel7 removed (updateinfo)
No packages needed for security; 1 packages available
Resolving Dependencies
However, if I run the command without --security, I can see available updates:
yum --disablerepo="*" --enablerepo="security-updates-rhel7" update mariadb-libs
Loaded plugins: amazon-id, rhui-lb
Resolving Dependencies
--> Running transaction check
---> Package mariadb-libs.x86_64 1:5.5.37-1.el7_0 will be updated
---> Package mariadb-libs.x86_64 1:5.5.40-2.el7_0 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
==============================================================================
 Package            Arch        Version                Repository               Size
==============================================================================
Updating:
 mariadb-libs       x86_64      1:5.5.40-2.el7_0       security-updates-rhel7   753 k

Transaction Summary
==============================================================================
Upgrade  1 Package

Total download size: 753 k
Is this ok [y/d/N]:
It seems I lost the security metadata when I did the reposync.
Any ideas what the problem could be?
Etan has the right idea; those are the two ways we found to get around this issue on RHEL 6. You could pull the Red Hat metadata straight out of your yum cache and copy it into your local repo, and that works 90% of the time... but 10% of the time it will give you random failures if Red Hat happens to be updating a repository while you are syncing it.
Red Hat has a guide for grafting security metadata into your local repo on RHEL 5/6, and I assume it works similarly on RHEL 7. If you have a Red Hat support account, see: https://access.redhat.com/solutions/55654
If you don't, below is my own take on it:
Run your reposync command with --download-metadata and the other trimmings, but I would start with one repo at a time and put each one in its own directory, similar to how Red Hat does it, e.g. mkdir -p /path/to/repo-id && reposync -l -n --download-metadata -r repo-id -p /path/to/repo-id/.
Pull the abcdefghij-updateinfo.xml.gz metadata files from Red Hat; these contain the security metadata for each repository. Do this by running yum list-sec and then looking for them in your local yum cache, under each repository's subdirectory, probably somewhere in /var/cache/yum/arch/7Server/repo-id.
Run createrepo on just that repository. createrepo -v /path/to/repo-id/
Go into /path/to/repo-id/ and then into the repodata subfolder. Copy the abcdefghij-updateinfo.xml.gz from your local yum cache into the repodata folder, renaming it to remove the hash at the beginning, so you are left with a file called updateinfo.xml.gz.
Use the modifyrepo command to insert the security metadata into that repo's table of contents (repomd.xml) file.
modifyrepo /path/to/repo-id/repodata/updateinfo.xml.gz /path/to/repo-id/repodata/
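A condensed sketch of the whole sequence for a single hypothetical repo-id (the yum cache path varies with architecture and release; adjust as needed):
mkdir -p /path/to/repo-id
reposync -l -n --download-metadata -r repo-id -p /path/to/repo-id/
createrepo -v /path/to/repo-id/

# copy the security metadata out of the local yum cache, dropping the hash prefix
cp /var/cache/yum/x86_64/7Server/repo-id/*-updateinfo.xml.gz \
   /path/to/repo-id/repodata/updateinfo.xml.gz

# register it in the repo's table of contents (repomd.xml)
modifyrepo /path/to/repo-id/repodata/updateinfo.xml.gz /path/to/repo-id/repodata/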

dotcloud push on cygwin fails with "rsync error: unexplained error (code 255)" (similar with git and hg)

Though I have followed the usual steps for using the dotCloud CLI under Cygwin, dotcloud push fails in all cases: --rsync, --hg, and --git.
I am on Windows 8 and Cygwin.
How can I push successfully?
Sample output:
me@host /cygdrive/d/project
$ dotcloud push --rsync
==> Pushing code with rsync from "./" to application myapp
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at /home/lapo/package/rsync-3.0.9-1/src/rsync-3.0.9/io.c(605) [sender=3.0.9]
me@host /cygdrive/d/project
$ dotcloud push --git
==> Pushing code with git from "./" to application myapp
Permission denied (publickey,password).
fatal: The remote end hung up unexpectedly
me@host /cygdrive/d/project
$ dotcloud push --hg
==> Pushing code with mercurial from "./" to application myapp
abort: no suitable response from remote hg!
Error: Mercurial returned a fatal error
You may be running into a bug in Cygwin's group permissions. Vineet Gupta gives a workaround in his blog. The problem comes from the very strict permissions ssh expects on keys, and the solution is to set the permissions on the ssh key properly (600: read/write by owner only). Cygwin seems to need the group to be added manually.
Updating the steps to get the dotCloud CLI installed, including setting the permissions, leads to:
Start the Cygwin Setup.
Select default choices until you reach the package selection dialog.
Enable the following packages:
net/openssh
net/rsync
devel/git
devel/mercurial
python/python (make sure it’s at least 2.6!)
web/wget
After the installation, you should have a Cygwin icon on your desktop. Start it: you will get a command-line shell.
Download easy_install
wget http://peak.telecommunity.com/dist/ez_setup.py
Install easy_install
python ez_setup.py
You now have easy_install; let’s use it to install pip:
easy_install pip
Now install dotcloud (the CLI)
pip install dotcloud
Set up the CLI with your credentials. This will also download the ssh key.
dotcloud setup
New step: update the permissions on your dotCloud key:
chgrp Users ~/.dotcloud_cli/dotcloud.key
chmod 600 ~/.dotcloud_cli/dotcloud.key
Now you should be able to run dotcloud push.
If you have multiple dotCloud accounts, you will need to repeat this process for each one, since each account has its own key. Also note that you shouldn't have to set these permissions manually, but the group ownership sometimes gets the wrong default in Cygwin. Linux and OS X don't seem to show this problem, though the permissions must be 600 on all OSes, so it is worth checking.
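To confirm the permissions took effect (assuming the default key location), the key should now list as read/write by owner only, with group Users, something like:
$ ls -l ~/.dotcloud_cli/dotcloud.key
-rw------- 1 me Users 1679 Aug 30 10:00 /home/me/.dotcloud_cli/dotcloud.key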