SOLVED: `gatsby develop` yields Error: EISDIR: illegal operation on a directory, readlink on the .cache folder - npm

Solution
My HDD is formatted as exFAT, which doesn't seem to support the readlink operation.
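To confirm the filesystem is the culprit before moving the project, PowerShell can report it directly (a minimal sketch, assuming Windows 8 or later where the Get-Volume cmdlet is available; swap D for your drive letter):
PS > Get-Volume -DriveLetter D | Select-Object DriveLetter, FileSystem
An NTFS-formatted drive supports symlinks and readlink, so moving the project there avoids the error.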
I am trying to set up a Gatsby site via the Gatsby CLI. 'gatsby new' works and sets everything up nicely, as far as I can tell. When it comes to starting the develop server I get the error from the title: "Error: EISDIR: illegal operation on a directory, readlink './path/on/my/hdd/.cache'"
I learned that "EISDIR means that the target of the operation is a directory in reality but that the expected filetype of the target is something other than a directory." (Using Node.js I get, "Error: EISDIR, read")
What drives me mad is that a file called .cache wouldn't make much sense, so somewhere deep down in this Gatsby jungle something has to be getting messed up.
I've tried Node versions 10.16.0 and 12.2.0 managed via nvm on Windows 10. I've tried different folders and hard drives, I've force-cleared my npm cache, I've tried different starter packages, and I've tried npm update on the node_modules installed via Gatsby.
PS > gatsby develop
success open and validate gatsby-configs - 0.075 s
success load plugins - 9.615 s
success onPreInit - 0.003 s
success initialize cache - 0.074 s
ERROR
Unable to copy site files to .cache EISDIR: illegal operation on a directory, readlink 'C:\my-gatsby-website\.cache'

Related

CodeDeploy not properly copying code from GitHub

I have CodeDeploy pull code from my GitHub repo. In the deployment's Commit ID (for GitHub) field I have specified the commit that I want to deploy. My repo has the following structure:
my-service/
  README.md
  .gitignore
  scripts/
    deploy.sh
  src/
    <lots of code here>
  pm2.dev.json
  appspec.yml
My appspec.yml file looks like:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu
hooks:
  BeforeInstall:
    - location: scripts/deploy.sh
      timeout: 300
      runas: root
My scripts/deploy.sh looks like:
sudo npm install pm2 -g
pwd
pm2 start /home/ubuntu/my-service/pm2.dev.json
When I run the CodeDeploy deployment for this, it fails with the following error:
Script at specified location: scripts/deploy.sh run as user root failed with exit code 1
When I look at the logs I see:
LifecycleEvent - BeforeInstall
Script - scripts/deploy.sh
[stderr]npm WARN config global `--global`, `--local` are deprecated. Use `--location=global` instead.
[stdout]changed 182 packages, and audited 183 packages in 8s
[stdout]
[stdout]12 packages are looking for funding
[stdout] run `npm fund` for details
[stdout]
[stdout]found 0 vulnerabilities
[stdout]/opt/codedeploy-agent
[stderr][PM2][ERROR] File /home/ubuntu/my-service/pm2.dev.json not found
Sure enough, when I look in /home/ubuntu/my-service, I do not see a pm2.dev.json file, because this server had been manually configured several weeks ago, before a pm2.dev.json file was added to the project. I would have expected CodeDeploy to have written what's in the repo to the server under /home/ubuntu.
Can anyone spot anything wrong with my appspec.yml or other configuration? Could it be a bad GitHub setup?
Had to change BeforeInstall to Install.
BeforeInstall runs before the source code (specified under files/source) is copied over, but Install runs just after that copy occurs, so the files will be available on the file system.
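For illustration, the hooks section would then attach the script to a post-copy event. A sketch based on the appspec above; note that the AWS AppSpec reference reserves the Install event for the agent's file copy itself, so AfterInstall is the hook name that typically works in practice:
hooks:
  AfterInstall:
    - location: scripts/deploy.sh
      timeout: 300
      runas: root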

How do I install Radare2 on Windows?

I am trying to get Radare2 installed on my Windows machine. I do have Windows Subsystem for Linux up and running if that changes things. I have tried the git technique from their website:
git clone https://github.com/radare/radare2
cd radare2
sys/install.sh
This did strange things depending on what I did. There are some comments headed with the # symbol that explain what's going on.
#-----Here I clone the repo.
PS [*****] C:\Users\*****\AppData\Local\Programs> git clone https://github.com/radare/radare2
Cloning into 'radare2'...
remote: Enumerating objects: 81, done.
remote: Counting objects: 100% (81/81), done.
remote: Compressing objects: 100% (71/71), done.
remote: Total 215078 (delta 27), reused 17 (delta 10), pack-reused 214997
Receiving objects: 100% (215078/215078), 117.53 MiB | 817.00 KiB/s, done.
Resolving deltas: 100% (164658/164658), done.
Updating files: 100% (3934/3934), done.
#-----Here I cd into the new repo and run the install script.
PS [*****] C:\Users\*****\AppData\Local\Programs> cd radare2
#-----This next command opened a new window, which disappeared immediately.
PS [*****] C:\Users\*****\AppData\Local\Programs\radare2> sys/install.sh
#-----Calling bash and passing the script yielded some nice errors.
PS [*****] C:\Users\*****\AppData\Local\Programs\radare2> bash sys/install.sh
sys/install.sh: line 2: $'\r': command not found
: ambiguous redirect 4: 1
sys/install.sh: line 6: $'\r': command not found
sys/install.sh: line 11: syntax error near unexpected token `$'in\r''
'ys/install.sh: line 11: ` case "$1" in
#-----Here I fired up my WSL Ubuntu system and tried to run the script.
PS [*****] C:\Users\*****\AppData\Local\Programs\radare2> wsl
*****@DESKTOP-6L7K90U:/mnt/c/Users/*****/AppData/Local/Programs/radare2$ sys/install.sh
: not found.sh: 2:
sys/install.sh: 5: Syntax error: Bad fd number
*****@DESKTOP-6L7K90U:/mnt/c/Users/*****/AppData/Local/Programs/radare2$
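The $'\r': command not found errors are the classic signature of Windows CRLF line endings in a shell script, and "Bad fd number" usually means the script was run with sh instead of bash. If you want to retry the script route, a sketch (assuming sed is available in WSL):
sed -i 's/\r$//' sys/install.sh    # strip the carriage returns
bash sys/install.sh                # run with bash, not sh
Re-cloning with git clone -c core.autocrlf=false avoids the CRLF conversion in the first place.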
At this point, I decided to try the Windows binary instead. I went to the download page, downloaded the Windows binary, and unpacked it into my AppData programs folder. I then opened that folder and double-clicked radare2.exe. This made a quick blip on the taskbar, as if a window was trying to open, which also immediately closed.
At this point, I suspect errors in the source code for Radare2 are causing it to crash almost immediately. Is this the case? Or do I need to do something different to get this running?
-----Solved-----
I went and experimented a little, including installing to a Linux VM using the git clone method. I have found that the Windows binary is the way to go for this. To use it, unpack the downloaded binary, then open CMD/PowerShell in the radare2 directory and run bin/radare2.exe or bin/r2.bat. You will need to manually add these to the PATH, though.
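Adding the unpacked bin folder to the user PATH could look like this in PowerShell (a sketch; the install location is an assumption based on the AppData folder used above):
# hypothetical install location; adjust to wherever you unpacked the binary
$r2bin = "$env:LOCALAPPDATA\Programs\radare2\bin"
$userPath = [Environment]::GetEnvironmentVariable('Path', 'User')
[Environment]::SetEnvironmentVariable('Path', "$userPath;$r2bin", 'User')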

Installing Tensorflow GPU/CUDA dependencies on a machine with no internet access

I have 2 machines -
dccten1a with no internet access where I need to install Tensorflow with GPU support
dccten1b with internet access so that I can download packages and transfer to dccten1a
In the final step of installing TensorFlow, when running the bazel build command to produce a .whl file, I get an error saying that it can't find a file in the folder it is looking in, and it obviously cannot download it either, as dccten1a has no internet access.
bazel build --config=opt --config=cuda /home/tensorflow/Documents/tf_dependencies/tensorflow-master/tensorflow/tools/pip_package:build_pip_package --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0"
ERROR: error loading package '': Encountered error while reading extension file 'closure/defs.bzl': no such package '@io_bazel_rules_closure//closure': Error downloading [http://bazel-mirror.storage.googleapis.com/github.com/bazelbuild/rules_closure/archive/5ca1dab6df9ad02050f7ba4e816407f88690cf7d.tar.gz, https://github.com/bazelbuild/rules_closure/archive/5ca1dab6df9ad02050f7ba4e816407f88690cf7d.tar.gz] to /home/xyzuser/.cache/bazel/_bazel_xyzuser/cb1e63cb5e61cab49a9fd2f5ba92d003/external/io_bazel_rules_closure/5ca1dab6df9ad02050f7ba4e816407f88690cf7d.tar.gz: All mirrors are down: [Unknown host: github.com, Unknown host: mirror.bazel.build]
I checked in the system, and there is no such directory as shown in the error message (i.e., /home/xyzuser/.cache/bazel/_bazel_xyzuser/cb1e63cb5e61cab49a9fd2f5ba92d003/external/io_bazel_rules_closure/). So, I created it, searched and found the requisite (?) file online, downloaded the file in the machine with internet, transferred it to the target machine, moved the file to the just created directory, and tried running the command again:
(tensorflow@dccten1a):
mkdir -p /home/tensorflow/.cache/bazel/_bazel_tensorflow/cb1e63cb5e61cab49a9fd2f5ba92d003/external/io_bazel_rules_closure
(tensorflow@dccten1b):
http://bazel-mirror.storage.googleapis.com/github.com/bazelbuild/rules_closure/archive/5ca1dab6df9ad02050f7ba4e816407f88690cf7d.tar.gz
sudo scp -r /home/tensorflow/Downloads/5ca1dab6df9ad02050f7ba4e816407f88690cf7d.tar.gz tensorflow@160.88.114.17:/home/tensorflow/Documents/tf_dependencies
(tensorflow@dccten1a):
mv /home/tensorflow/Documents/tf_dependencies/5ca1dab6df9ad02050f7ba4e816407f88690cf7d.tar.gz /home/tensorflow/.cache/bazel/_bazel_tensorflow/cb1e63cb5e61cab49a9fd2f5ba92d003/external/io_bazel_rules_closure
Then I run the bazel build command again, but the same error persists.
Use --experimental_repository_cache to download the dependencies on the machine with internet access, transfer the cache to the machine without internet access, and use --experimental_repository_cache to refer to the same cache.
e.g.
1) On the machine with internet access, run
tensorflow@dccten1b $ bazel build --experimental_repository_cache=/path/to/some/folder --config=opt --config=cuda /home/tensorflow/Documents/tf_dependencies/tensorflow-master/tensorflow/tools/pip_package:build_pip_package --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0"
2) Copy the cache at /path/to/some/folder to the machine without internet access using an SD card or flash drive.
3) On the machine without internet access, run the same command again, setting the flag to the cache's location.
tensorflow@dccten1a $ bazel build --experimental_repository_cache=/path/to/some/folder --config=opt --config=cuda /home/tensorflow/Documents/tf_dependencies/tensorflow-master/tensorflow/tools/pip_package:build_pip_package --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0"

cloud VM instance broken packages after updating packages to earlier version

I did an apt-get upgrade because the load times of our production server were about 40 seconds. I don't have a snapshot from before or after the upgrade (although there is a snapshot from six months ago). Load times improved to 15-ish seconds, but our Erizo service stopped working. Erizo was also running on that instance. Restarting the services didn't help, so I tried downgrading the packages to their previous versions (https://askubuntu.com/questions/138284/how-to-downgrade-a-package-via-apt-get), just like they were, but for almost every package there was an error: the previous package version did not exist (which is strange, because I copied the versions from the output of dpkg -l).
Only a few of them were successfully downgraded, but I got a serious error when downgrading e2fslibs to its previous version: The following packages have unmet dependencies:
e2fsprogs: PreDepends: e2fslibs
Somehow that messed up initramfs and/or initramfs-tools, and now the instance is running but I can't get into it.
Connecting to the instance in Google Cloud Platform: Connecting...
Could not connect, retrying (1/3).
Google Cloud Shell isn't able to gcloud compute ssh: Permission denied (publickey).
Using gcloud locally also says Permission denied (publickey).
I checked the following:
There are project public keys defined; there aren't any instance public keys defined or any other metadata (Google Cloud SSH Keys).
In Google Cloud Platform >> Compute Engine >> VM instances >> Permissions, I see 'compute' is disabled.
Verify that the daemon is running by navigating to the serial console output page and looking for output lines prefixed with the accounts-from-metadata: string. If you are using a standard image but do not see these output prefixes in the serial console output, the daemon might be stopped. --> I don't see this, so I expect it's NOT running.
Check firewall rules (gcloud compute firewall-rules list):
default-allow-ssh default 0.0.0.0/0 tcp:22 //rule is present
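For reference, the serial console output checked above can also be pulled from the CLI (a sketch, using the instance name that appears later in the question):
gcloud compute instances get-serial-port-output tta-media-test-2 --zone europe-west1-c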
The following packages were upgraded:
apt
apt-transport-https
apt-utils
binutils
cloud-init
cloud-initramfs-growroot
cloud-initramfs-rescuevol
comerr-dev
dosfstools
e2fslibs
e2fsprogs
gce-cloud-config
gce-daemon
gce-imagebundle
gce-startup-scripts
google-cloud-sdk
landscape-client
landscape-common
libapt-inst1.4
libapt-pkg4.12
libcomerr2
libss2
libudev0
mountall
nginx
nginx-common
nginx-full
ntp
ntpdate
procps
python-apt
python-apt-common
python-lazr.restfulclient
udev
unattended-upgrades
update-manager-core
upstart
whoopsie
x11-utils
This is what I get from the serial output:
- mountall: Event failed
- landscape-client is not configured, please run landscape-config.
What to do next?
Apply a startup script to the running instance (following https://cloud.google.com/compute/docs/startupscript) and try to perform apt-get upgrade?
Try to create a new public key (again) in Google Cloud Shell to access the instance?
In Google Cloud Shell, the key file was first generated after typing gcloud compute --project "enduring-palace-762" ssh --zone "europe-west1-c" "tta-media-test-2"
WARNING: The private SSH key file for Google Compute Engine does not exist.
WARNING: You do not have an SSH key for Google Compute Engine.
WARNING: [/usr/bin/ssh-keygen] will be executed to generate a key. This tool needs to create the directory /home/developer/.ssh
The generated public key was stored in /home/developer/.ssh/google_compute_engine.pub. I made a copy of that, prepended the username, and added the content of the public key to Compute Engine >> Metadata >> SSH Keys. *The key is accepted, but the username doesn't show like it does with all the other username-key pairs.
I still get a Permission denied (publickey) error when using gcloud compute ssh tta-media-test-2 --zone europe-west1-c
When I provide the SSH key file like this:
gcloud compute ssh tta-media-test-2 --zone europe-west1-c --ssh-key-file=my-ssh-keys_copy.pub (pwd is inside the folder where the key file is)
WARNING: The public SSH key file for Google Compute Engine does not exist.
WARNING: You do not have an SSH key for Google Compute Engine.
WARNING: [/usr/bin/ssh-keygen] will be executed to generate a key.
I get the same result when I generate a new key with ssh-keygen -t rsa -f my-ssh-keys.
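For reference, each entry under Compute Engine >> Metadata >> SSH Keys is expected to be a single line in USERNAME:KEY form; if that prefix is malformed, the console cannot parse out the username, which would match the symptom above (a sketch with hypothetical key material):
developer:ssh-rsa AAAAB3NzaC1yc2EAAA... developer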
Any other possible solution would be much appreciated.
[update] I am able to ssh into the 'broken' instance from local using ssh user@externalIpOfInstance. My plan is to bring it to an upgraded, stable state, create a snapshot, and see from there.
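Creating that snapshot from the CLI could look like this (a sketch; the disk name is an assumption, since a boot disk usually shares its instance's name):
gcloud compute disks snapshot tta-media-test-2 --zone europe-west1-c --snapshot-names tta-media-test-2-rescue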
sudo apt-get -f install
0 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Setting up initramfs-tools (0.99ubuntu13.5) ...
update-initramfs: deferring update (trigger activated)
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-3.13.0-79-generic
E: /usr/share/initramfs-tools/hooks/fixrtc failed with return 1.
update-initramfs: failed for /boot/initrd.img-3.13.0-79-generic with 1.
dpkg: error processing initramfs-tools (--configure):
subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
initramfs-tools
E: Sub-process /usr/bin/dpkg returned an error code (1)
sudo apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages have been kept back:
google-chrome-stable
The following packages will be upgraded:
comerr-dev libcomerr2 libss2 unattended-upgrades
4 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
1 not fully installed or removed.
Need to get 0 B/188 kB of archives.
After this operation, 4,096 B of additional disk space will be used.
Do you want to continue [Y/n]? y
Preconfiguring packages ...
(Reading database ... 178509 files and directories currently installed.)
Preparing to replace comerr-dev 2.1-1.42-1ubuntu2.2 (using .../comerr-dev_2.1-1.42-1ubuntu2.3_amd64.deb) ...
Unpacking replacement comerr-dev ...
Preparing to replace libcomerr2 1.42-1ubuntu2.2 (using .../libcomerr2_1.42-1ubuntu2.3_amd64.deb) ...
Unpacking replacement libcomerr2 ...
Preparing to replace libss2 1.42-1ubuntu2.2 (using .../libss2_1.42-1ubuntu2.3_amd64.deb) ...
Unpacking replacement libss2 ...
Preparing to replace unattended-upgrades 0.76ubuntu1.1 (using .../unattended-upgrades_0.76ubuntu1.2_all.deb) ...
Unpacking replacement unattended-upgrades ...
Processing triggers for install-info ...
Processing triggers for man-db ...
Processing triggers for ureadahead ...
Setting up initramfs-tools (0.99ubuntu13.5) ...
update-initramfs: deferring update (trigger activated)
Setting up libcomerr2 (1.42-1ubuntu2.3) ...
Setting up comerr-dev (2.1-1.42-1ubuntu2.3) ...
Setting up libss2 (1.42-1ubuntu2.3) ...
Setting up unattended-upgrades (0.76ubuntu1.2) ...
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-3.13.0-79-generic
E: /usr/share/initramfs-tools/hooks/fixrtc failed with return 1.
update-initramfs: failed for /boot/initrd.img-3.13.0-79-generic with 1.
dpkg: error processing initramfs-tools (--configure):
subprocess installed post-installation script returned error exit status 1
No apport report written because MaxReports is reached already
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
Errors were encountered while processing:
initramfs-tools
E: Sub-process /usr/bin/dpkg returned an error code (1)
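What blocks every apt run above is the fixrtc initramfs hook returning 1. It may be worth tracing it before anything else (a sketch; whether that hook is actually needed on this VM is an open question, not a given):
sudo sh -x /usr/share/initramfs-tools/hooks/fixrtc    # trace the hook to see which command fails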
sudo apt-get remove initramfs-tools-bin
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
cron : Depends: adduser but it is not going to be installed
procps : Depends: initscripts
upstart : Depends: initscripts
Depends: mountall
Depends: ifupdown (>= 0.6.10ubuntu5)
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
What to do here?
If you were able to SSH into the instance using a given SSH key before, the most likely reason it would stop working is if you somehow removed that SSH key or if the SSH daemon wasn't running/was otherwise broken. It appears as though in the downgrade you broke this machine.
Why do you need this particular VM instance? Does it have important data? If so, you can shut it off, mount its disk using a fresh VM instance, and copy that data off.
If it runs a service, you should probably cut over to a new machine: even if you're able to get into the instance, there's no telling what still works and what doesn't.
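Mounting the broken instance's disk from a fresh VM could look like this (a sketch; instance names, zone, and the device path are assumptions to adapt):
gcloud compute instances stop tta-media-test-2 --zone europe-west1-c
gcloud compute instances create rescue-vm --zone europe-west1-c
gcloud compute instances attach-disk rescue-vm --disk tta-media-test-2 --zone europe-west1-c
# then, on rescue-vm (check the device name with lsblk first):
sudo mkdir -p /mnt/broken && sudo mount /dev/sdb1 /mnt/broken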
I'm facing an issue with my BigBlueButton installation:
Reading state information...
You might want to run 'apt --fix-broken install' to correct these.
The following packages have unmet dependencies:
bigbluebutton : Depends: bbb-config but it is not going to be installed
gce-compute-image-packages : Depends: google-compute-engine but it is not going to be installed
E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).

Debian package bug on squeeze?

My hand-made Debian package won't install if I build it on Squeeze (well, a Squeeze chroot).
If I build it on a Wheezy box, though, it builds installable packages.
Note that it builds fine in either case. I'm generating the Debian packages using CMake/CPack.
The error message i get is:
user@buildbox:/builddir/packagename# dpkg -i packagename_1.0.3.deb
(Reading database ... 35116 files and directories currently installed.)
Unpacking packagename (from packagename_1.0.3.deb) ...
dpkg: error processing packagename_1.0.3.deb (--install):
unable to create `/usr/share/packagename/builddir/mixer_devices.txt.dpkg-new' (while processing `./usr/share/packagename/builddir/mixer_devices.txt'): No such file or directory
dpkg-deb: subprocess paste killed by signal (Broken pipe)
Errors were encountered while processing:
packagename_1.0.3.deb
Might it be that mixer_devices is not contained within the created deb file for some reason?
Just do an ar x packagename_1.0.3.deb and see what the tar file contains.
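Expanding on that, a quick way to check whether mixer_devices.txt made it into the package (a sketch; the data member may be data.tar.gz or data.tar.xz depending on the dpkg used to build):
dpkg-deb -c packagename_1.0.3.deb | grep mixer_devices    # list the payload directly
# or manually:
ar x packagename_1.0.3.deb
tar -tf data.tar.gz | grep mixer_devices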