Error when cloning a Fossil repository: "Attempt to write a readonly database"

This is the first time I've attempted to host a Fossil repository on my personal server. When I try to clone a project from Windows 7, I get a bizarre message:
PS [folder]> fossil clone 'http://[hostName]/cgi-bin/repo/[repoName]' [repoName].fossil
                Bytes      Cards  Artifacts     Deltas
Sent:              53          1          0          0
Received:         218          1          0          0
Sent:              58          1          0          0
Error: Database error: attempt to write a readonly database
UPDATE event SET mtime=(SELECT m1 FROM time_fudge WHERE mid=objid) WHERE objid IN (SELECT mid FROM time_fudge);DROP TABLE time_fudge;
Received: 218 1 0 0
Total network traffic: 515 bytes sent, 858 bytes received
C:\Program Files (x86)\Fossil\fossil.exe: server returned an error - clone aborted
What does the error message mean? Where did I go wrong?

Alright, I think I figured out the write problem. I changed the group ownership of each fossil file to www-data and allowed the group to read and write:
$ sudo chown :www-data *.fossil
$ sudo chmod g+w *.fossil
That seems to have solved the problem.
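Note that SQLite (which Fossil uses for storage) also creates temporary journal files next to the repository file, so the directory holding the repositories must be writable by the web server's group as well. A minimal sketch, assuming the repositories live in /home/user/fossils (substitute your actual path):
$ sudo chown :www-data /home/user/fossils
$ sudo chmod g+w /home/user/fossils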

Related

Unable to install nvm on WSL due to network issues

I'm trying to install nvm using curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash on WSL2, but I'm getting different errors. Initially, the curl command would return the following:
> $ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0
curl: (6) Could not resolve host: raw.githubusercontent.com
After running netsh int ip reset in Windows, which was suggested in another question, the same command started timing out instead:
> $ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:04:59 --:--:--     0
curl: (28) Connection timed out after 300000 milliseconds
I've also tried manually saving install.sh to my machine and running it locally (after setting its permissions with chmod +x install.sh), but that returns a similar error:
> $ ./install.sh
=> Downloading nvm from git to '/home/mparra/.nvm'
=> Cloning into '/home/mparra/.nvm'...
fatal: unable to access 'https://github.com/nvm-sh/nvm.git/': Failed to connect to github.com port 443: Connection timed out
Failed to clone nvm repo. Please report this!
I can successfully ping github.com. ping -c 100 github.com returns the following:
--- github.com ping statistics ---
100 packets transmitted, 100 received, 0% packet loss, time 99156ms
rtt min/avg/max/mdev = 15.280/20.739/85.205/9.141 ms
This issue suggests that a Windows update resolved the problem, but that's not an option for me, since it's a work machine and I can't update beyond build 18363.2039. I've also checked that my VPN is not enabled, and I set my DNS to 8.8.8.8 and 8.8.4.4, which had no effect.
Please try the following in your WSL instance:
sudo rm /etc/resolv.conf
sudo bash -c 'echo "nameserver 8.8.8.8" > /etc/resolv.conf'
sudo bash -c 'echo "[network]" > /etc/wsl.conf'
sudo bash -c 'echo "generateResolvConf = false" >> /etc/wsl.conf'
sudo chattr +i /etc/resolv.conf
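The chattr +i step makes resolv.conf immutable so WSL can't overwrite it. After that, restart WSL from PowerShell so the wsl.conf change takes effect:
wsl --shutdown
Then reopen your WSL terminal and retry the install.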
With these changes in place, I can install with curl.
I have a feeling you are probably correct about this being the same issue mentioned on GitHub that was resolved in a Windows update.
If that's truly the case, you are probably going to continue to run into issues even after getting nvm installed. For instance, nvm probably will have trouble downloading Node releases.
The easiest solution I can propose, if it works for you, is simply to convert to WSL1 instead of WSL2. WSL1 will handle most (but not all) Node use-cases just as well as WSL2, and it handles networking very differently than WSL2 does. If the Windows networking stack is working fine for you, then WSL1's should be as well.
As noted in that GitHub issue, this seemed to be a problem that occurred only in Hyper-V instances. WSL2 runs in Hyper-V, but WSL1 does not.
If you go this route, you can either:
create a copy of your existing WSL2 distribution and convert that copy to WSL1. From PowerShell:
wsl --shutdown
wsl -l -v # Confirm <distroname>
wsl --export <distroname> path\to\backup.tar
mkdir .\path\for\new\instance
wsl --import WSL1 .\path\for\new\instance path\to\backup.tar --version 1 # WSL1 can be whatever name you choose
wsl -d WSL1
Note that you'll be root, by default. To change the default user, follow this answer.
Or, just convert the WSL2 instance to WSL1:
wsl --shutdown
wsl -l -v # Confirm <distroname>
wsl --export <distroname> path\to\backup.tar # Just in case
wsl --set-version <distroname> 1
If WSL1 doesn't work for you (at least in the short term, until your company pushes that update), then there may be another option similar to the one mentioned in this comment on that GitHub issue. Let me know if you need to go that route, and I'll see if I can simplify that a bit.

Downloading whole folders of files over ssh

I need to select and download many folders stored on a computer that I can only access over a remote ssh connection. I created a list ("list.txt") to download only the folders of interest. I tried using a "for" loop:
for i in "list.txt"; do
scp -r /pwd/of/folder/of/origen/ /pwd/of/folder/destiny;
done
But it doesn't read my list and downloads all the folders. I also tried:
for i in "list.txt"; do
rsync -Pavizuh /pwd/of/folder/of/origen/$i /pwd/of/folder/destiny;
done
but it sends this message:
building file list ...
rsync: link_stat "/Users/rtorres/daniela/Proyecto/Anotacion/Strain9998" failed: No such file or directory (2)
0 files to consider
sent 29 bytes  received 20 bytes  98.00 bytes/sec
total size is 0  speedup is 0.00
rsync error: some files could not be transferred (code 23) at /System/Volumes/Data/SWE/macOS/BuildRoots/d7e177bcf5/Library/Caches/com.apple.xbs/Sources/rsync/rsync-55/rsync/main.c(996) [sender=2.6.9]
What can I do? Thanks!
You want the contents of "list.txt", not the literal name "list.txt".
for i in $(cat list.txt); do
  scp -r "server:/pwd/of/folder/of/origen/$i" "/pwd/of/folder/destiny/$i"
done
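Note that $(cat list.txt) splits on whitespace, so this breaks if any folder name contains spaces. A more robust sketch, assuming one folder name per line in list.txt (server is a placeholder for your SSH host):
while IFS= read -r i; do
  # copy each listed folder, preserving its name at the destination
  scp -r "server:/pwd/of/folder/of/origen/$i" "/pwd/of/folder/destiny/$i"
done < list.txt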

Metasploitable 3 - System error 67

I am trying to set up Metasploitable 3 (VirtualBox) on my Ubuntu 16.04 machine.
I have done everything according to the maintainers' guidelines (https://github.com/rapid7/metasploitable3) when it comes to dependencies etc.
However, when I try to start it (via vagrant up --provision win2k8), I get this nasty little error that I just can't fix.
It always says:
win2k8: System error 67 has occurred.
win2k8: The network name cannot be found.
The following WinRM command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
cmd /q /c "c:\tmp\vagrant-shell.bat"
Stdout from the command:
CMDKEY: Credential added successfully.
Stderr from the command:
System error 67 has occurred.
The network name cannot be found.
I just can't find anything about it on the internet. I only "know" that it has something to do with network settings, but I don't know what to do now.
I'd appreciate some help!

Not able to mount azure file share into local RHEL7 VM

I want to mount an Azure file share (with symlink support) into a local RHEL7 VM. I am using the following command:
mount -t cifs //<storage-account-name>.file.core.windows.net/<share-name> /mymountpoint -o vers=3.0,username=<storage-acc-name>,password=<pwd>,dir_mode=0777,file_mode=0777,sec=ntlmssp,mfsymlinks
but getting the following error
mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
dmesg | tail gives the following log:
[root@googleapps ~]# dmesg | tail
[98383.619149] fs/cifs/smb2misc.c: SMB2 data length 0 offset 0
[98383.619151] fs/cifs/smb2misc.c: SMB2 len 77
[98383.619163] fs/cifs/transport.c: cifs_sync_mid_result: cmd=1 mid=1 state=4
[98383.619168] Status code returned 0xc0000022 STATUS_ACCESS_DENIED
[98383.619175] fs/cifs/smb2maperror.c: Mapping SMB2 status code -1073741790 to POSIX err -13
[98383.619177] fs/cifs/misc.c: Null buffer passed to cifs_small_buf_release
[98383.619181] CIFS VFS: Send error in SessSetup = -13
[98383.619185] fs/cifs/connect.c: CIFS VFS: leaving cifs_get_smb_ses (xid = 59) rc = -13
[98383.619297] fs/cifs/connect.c: CIFS VFS: leaving cifs_mount (xid = 58) rc = -13
[98383.619300] CIFS VFS: cifs_mount failed w/return code = -13
Try removing the symlink option, then manually linking it afterwards. I did a quick test on my machine, a brand-new RHEL 7 VM on Azure (set up for testing), and linked a file share with it using the following steps:
1. Manually create the mount point directory, then run the following:
sudo yum install cifs-utils
sudo mount -t cifs //USERNAME.file.core.windows.net/FILESHARE ~/mountpoint/ -o vers=3.0,username=<>,password=<>,dir_mode=0777,file_mode=0777,sec=ntlmssp
Then try using the symlink afterwards.
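For instance, a minimal sketch of linking a directory on the share into your home directory (the names here are placeholders, not from the original post):
ln -s ~/mountpoint/shared-dir ~/shared-dir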
I had a similar problem. I read most of the forums and tried everything; in the end I found that the password I was using was wrong: it has to be the access key that is generated with the storage account. I got the hint by using a troubleshooting script from Azure support.
You can find the script here:
https://gallery.technet.microsoft.com/Troubleshooting-tool-for-02184089
The Bash script found there walks through a series of probable issues and will give you a fair idea of the problem.
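As an aside, a hedged example of retrieving that access key with the Azure CLI (assuming az is installed and logged in; <resource-group> and <storage-account> are placeholders):
az storage account keys list --resource-group <resource-group> --account-name <storage-account> --query "[0].value" -o tsv
The returned value is what goes in the password= mount option.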
Sometimes you will get Status code returned 0xc0000022 STATUS_ACCESS_DENIED if the storage account's "Public network access" setting is not configured properly.
Set this setting to "All networks" or add your server's IP/CIDR to the firewall exclusion list.
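A hedged sketch of adding such a firewall exception with the Azure CLI (placeholders throughout; 203.0.113.10 is an example address):
az storage account network-rule add --resource-group <resource-group> --account-name <storage-account> --ip-address 203.0.113.10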

Hortonworks Nodemanager starts but then fails: Connection refused to :8042

I'm trying to solve an issue with a newly added datanode on our Hortonworks cluster. The YARN NodeManager on that node fails shortly after starting, and the following error message is logged:
Connection failed to http://(ipaddress):8042/ws/v1/node/info (Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/alerts/alert_nodemanager_health.py", line 166, in execute
connection_timeout=curl_connection_timeout, kinit_timer_ms = kinit_timer_ms)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/curl_krb_request.py", line 198, in curl_krb_request
_, curl_stdout, curl_stderr = get_user_call_output(curl_command, user=user, env=kerberos_env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_user_call_output.py", line 61, in get_user_call_output
raise ExecutionFailed(err_msg, code, files_output[0], files_output[1])
ExecutionFailed: Execution of 'curl --location-trusted -k --negotiate -u : -b /var/lib/ambari-agent/tmp/cookies/4268dd36-9f72-4be0-8d82-5f0a124a3a72 -c /var/lib/ambari-agent/tmp/cookies/4268dd36-9f72-4be0-8d82-5f0a124a3a72 http://gdcdrwhdb821.dir.ucb-group.com:8042/ws/v1/node/info --connect-timeout 5 --max-time 7 1>/tmp/tmp7pZrbM 2>/tmp/tmpgM4wdg' returned 7.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (7) Failed connect to (ipaddress):8042; Connection refused
)
This doesn't really tell me WHY the connection was refused, though, except that whatever YARN process corresponds to port 8042 isn't running:
netstat -tulpn | grep 8042
I've been looking for another NodeManager log, perhaps with more information, but cannot find anything useful under /var/log/hadoop-yarn or the yarn.nodemanager.local-dirs / yarn.nodemanager.log-dirs directories.
Are there other places I can look for YARN NodeManager error logs? Does anyone know what could be causing this?
Edit: After re-checking, I found this useful bit in /var/log/hadoop-yarn/yarn/yarn-yarn-nodemanager-(ipaddress).log:
2017-04-19 14:01:14,670 FATAL nodemanager.NodeManager (NodeManager.java:initAndStartNodeManager(549)) - Error starting NodeManager
org.apache.hadoop.service.ServiceStateException: java.lang.ClassNotFoundException: org.apache.spark.network.yarn.YarnShuffleService
Were you able to fix this?
I faced a similar issue today.
I stopped YARN in my HDP cluster, deleted the /var/log/hadoop-yarn/nodemanager/recovery-state directory, and started YARN again.
The NodeManager is running without failing now.
Not sure if this helps at this point; you might have already solved it.
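A sketch of those steps as commands, assuming the recovery-state path above and that YARN is stopped and started from Ambari (both assumptions based on the description):
# on the affected node, with YARN stopped via Ambari:
sudo rm -rf /var/log/hadoop-yarn/nodemanager/recovery-state
# then start YARN again from Ambari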
You are using the external Spark shuffle service, which runs as an auxiliary service inside the NodeManager. Currently it is not able to find the shuffle service jar on the classpath.
Please add the location of the shuffle service jar to yarn.application.classpath in yarn-site.xml.
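A hedged sketch of what that property might look like; the jar location varies by HDP and Spark version, so /usr/hdp/current/spark2-client/aux/* is an assumption you should verify on your node (e.g. with find /usr/hdp -name '*yarn-shuffle*.jar'):
<property>
  <name>yarn.application.classpath</name>
  <!-- the last entry is an assumed shuffle-jar location; verify it on your node -->
  <value>$HADOOP_CONF_DIR,/usr/hdp/current/hadoop-client/*,/usr/hdp/current/spark2-client/aux/*</value>
</property>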
It is also working fine on my side. Please stop the YARN service on the specific node only, not the full YARN service.
I stopped YARN in my HDP cluster, deleted the /var/log/hadoop-yarn/nodemanager/recovery-state directory, and started YARN again.
This worked for me too. I think it was a file permission problem.
You may also need to increase the timeout of the health check in the alert settings.