apt-mirror not syncing folders

I am trying to sync the files below from a remote server to my local machine using apt-mirror:
myroot/
    Packages.gz
    Tool1/ (contains a deb file)
        tool1.deb
    Tool2/ (contains a deb file)
        tool2.deb
In mirror.list on my local machine, the entry looks like this:
deb http://(remote-server-ip)/myroot
When I run the apt-mirror command, it downloads only Packages.gz and ignores the Tool1 and Tool2 folders on the remote.
It looks like I need to maintain some standard folder structure on the remote for apt-mirror to sync it.
Could anyone help me out?
Thanks a ton!
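For what it's worth, apt-mirror expects the remote to be laid out like a regular Debian repository, and a mirror.list entry normally names a distribution and at least one component after the URL. A minimal sketch, assuming the remote follows the standard dists/pool layout (the distribution name stable and the component main are placeholders for whatever the repository actually uses):
set base_path /var/spool/apt-mirror
deb http://(remote-server-ip)/myroot stable main
For a flat repository (Packages.gz sitting directly next to the .deb files), the usual sources.list spelling is deb http://(remote-server-ip)/myroot ./, though apt-mirror's support for flat repositories varies by version.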

VS Code on Chromebook Penguin with Remote SSH: Could not establish connection to "servername", the VS Code Server failed to start

I installed VS Code on Penguin on an Acer Chromebook R11.
I followed the steps for Debian here: https://code.visualstudio.com/docs/setup/linux
It works like a charm, but I need to connect to my remote dev server using the official Remote - SSH extension by Microsoft.
I then configured a .ssh folder for the user with the correct permissions: https://gist.github.com/grenade/6318301
Now I get:
Could not establish connection to "servername", the VS Code Server failed to start
The detailed log is:
[11:39:39.572] Resolver error: The VS Code Server failed to start
[11:39:39.609] TELEMETRY: {"eventName":"resolver","properties":{"outcome":"failure","reason":"ExitCode","askedPw":"0","askedPassphrase":"1","asked2fa":"0","askedHostKey":"0","gotUnrecognizedPrompt":"0","remoteInConfigFile":"1"},"measures":{"resolveAttempts":1,"exitCode":32,"retries":1}}
[11:39:39.620] ------
I have no idea what it could be. Do you?
I was having a similar issue and resolved it by deleting the .vscode-server directory that had been created on my server. It is a hidden directory created in the home directory (you can use ls -la to display all files, including hidden ones). My hunch is that some incorrect data was being cached there, so deleting the directory gives you a clean slate.
After deleting it, you can try connecting again through Remote - SSH in VS Code.
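A minimal sketch of that cleanup, run from the client (servername is the placeholder host from the SSH config):
ssh servername 'rm -rf ~/.vscode-server'
On the next connection, Remote - SSH downloads and installs a fresh copy of the server component.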

gsutil cp / download file to Windows server

I'm very new at this and need some help; I'm sure I'm not doing something right. I have a Synology NAS that has a cool option to sync files to Google Cloud Storage. This is a great way to get my backups off site.
I have my backups syncing to a Coldline storage bucket. Now that my files are syncing, I'm looking to document the process in case I need to retrieve them.
I want to download a whole folder and all of the files inside it to a Windows server. I installed gsutil and am trying to run this command.
gsutil -m cp -R dir gs://bhp_backup_sync/backup/foldername
but after I run this I get the following exception.
CommandException: No URLs matched: dir
CommandException: 1 file/object could not be transferred.
Noob here, what am I missing?
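The command as written is the upload direction: gsutil cp takes the source first and the destination second, so here gsutil looks for a local directory literally named dir, which is what "No URLs matched: dir" is complaining about. A sketch of the download direction (D:\backups is a placeholder for the local target folder):
gsutil -m cp -R gs://bhp_backup_sync/backup/foldername D:\backups
-m keeps the parallel transfers and -R recurses, so the whole tree under foldername should land in D:\backups.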

VMware Workstation - Cannot open disk xxxx or one of the snapshot disks it depends on

I'm running CentOS 7 under VMware Workstation on a Windows 7 laptop. All was well until I restarted my laptop this morning, and my VM started complaining as below:
The parent virtual disk has been modified since the child was created. The content ID of the parent virtual disk does not match the corresponding parent content ID in the child
Cannot open the disk 'C:\Users\<user>\Documents\Virtual Machines\CentOS 64-bit\CentOS 64-bit-000003.vmdk' or one of the snapshot disks it depends on.
Module 'Disk' power on failed.
Failed to start the virtual machine.
Below are images of the folder containing the VM and of the VM itself.
I've looked through the VMware log and found the disk ID:
2016-03-21T15:56:15.685+13:00| vmx| I125: DISKLIB-LINK : Opened 'C:\virtmac\CentOS 64-bit.vmdk' (0xe): monolithicSparse, 419430400 sectors / 200 GB.
2016-03-21T15:56:15.685+13:00| vmx| I125: DISKLIB-LINK : DiskLinkIsAttachPossible: Content ID mismatch (parentCID b0f614a0 != a0549cb5)
All you have to do is delete the .lck files from the folder containing your .vmdk files.
They are generally found under
C:\Users\UserName\Documents\Virtual Machines\VMWareName
Alternatively, you can move the .lck files one folder up, to make sure you don't delete any other file by mistake.
Deleting all .lck files in the folder should solve the problem.
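A sketch of the safer variant from a Windows command prompt (the path is a placeholder; adjust it to your VM's folder):
cd "C:\Users\UserName\Documents\Virtual Machines\VMWareName"
mkdir ..\lck-backup
rem the .lck entries are usually folders, so move them with a directory loop
for /d %D in (*.lck) do move "%D" ..\lck-backup
If the VM then boots cleanly, the backed-up .lck entries can be deleted.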
If you use VMs such as Kali Linux, it might happen that your antivirus quarantines parts of the .vmdk files. In my case I had to restore the file from the Windows Defender quarantine.
If you are using Kali in a VM:
Go to the VM's main directory (the one with the configuration file).
Determine the missing partition file, e.g. kali-linux-2022.3-vmware-amd64-s003.vmdk.
Copy any other partition file and give it the name of the missing one:
> copy kali-linux-2022.3-vmware-amd64-s004.vmdk kali-linux-2022.3-vmware-amd64-s003.vmdk
In case you then face a BusyBox initramfs error, type:
(initramfs) fsck /dev/sda1 -y

nfsnobody User Privileges

I have set up an NFS file share between two CentOS 6 64-bit machines. On the server, the folder being shared was originally owned by the root user. On the client it turned up as being owned by nfsnobody. When I tried to write to the folder from the client I got a permissions error. So I changed the folder ownership on the server to nfsnobody and chmod'd it to 777. However, still no joy: I continue to get a permissions error. Clearly, there is more to this. I would be much obliged to any Linux gurus out there (I personally wouldn't merit being called anything more than a newbie) who might be able to help fix this issue.
Edit - I should have mentioned that trying to write to the shared folder from the client does manage to create a file entry. However, the file size is 0 and a permissions error is reported.
The issue here is to do with the entry in /etc/exports. It should read
folder ip(rw,all_squash,sync,no_subtree_check)
I had missed the all_squash bit. That apart, make sure that the folder on the server is owned by nfsnobody. On my setup, the nfsnobody user on both client and server ended up with a user ID of 65534. However, it is well worth checking this (in /etc/passwd), or else...
Here are a couple of useful references:
How to setup an NFS Server
NFS on CentOS
For the benefit of anyone looking to set up an NFS server, I give below what worked for me on my CentOS 6 64-bit machines.
SERVER
yum install nfs-utils nfs-utils-lib   # install NFS
rpm -q nfs-utils                      # check the install
/etc/init.d/rpcbind start
chkconfig --level 235 nfs on
/etc/init.d/nfs start
chkconfig --level 35 rpcbind on
With this done, create the folder you want to share:
mkdir folder
chown 65534:65534 folder
chmod 755 folder
Now define the folder to be shared/exported. Use your favorite text editor (vi or whatever) to open/create /etc/exports and add the line:
folder clientIP(rw,all_squash,sync,no_subtree_check)
(Note there is no space between clientIP and the options; with a space, the options would apply to the whole world rather than to that client.)
CLIENT
Install, check, bind and start as above, then mount the export:
mount -t nfs serverIP:folder clientFolderLocation
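To make the mount survive reboots, an /etc/fstab entry along these lines should work (serverIP, folder and clientFolderLocation are the same placeholders as above):
serverIP:folder  clientFolderLocation  nfs  defaults  0  0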
If all goes well, you should now be able to write a little script on your client:
<?php
$file = $_SERVER['DOCUMENT_ROOT']."/../nfsfolder/test.txt";
file_put_contents($file,'Hello world of NFS!');
?>
browse to it, and find that test.txt now exists on the server with the content "Hello world of NFS!". In this example I have placed my mounted folder one level above the document root.
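A quicker smoke test that skips PHP entirely, assuming the same mount point as above (clientFolderLocation is the placeholder path):
echo 'Hello world of NFS!' > clientFolderLocation/test.txt
ls -l clientFolderLocation/test.txt   # the size should be non-zero
If the write succeeds here but fails through the web server, the remaining problem is likely the web server user's mapping rather than NFS itself.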

PSCP copy files from GoDaddy to my Windows machine

I want to take a backup of my website, which is hosted on GoDaddy.
I used the pscp command from the Windows command prompt to try to download the whole public_html folder.
My command is:
pscp -r user@host:public_html/ d:\sites\
Files and folders are downloading properly. But the issue is that public_html and the other subfolders contain the two entries "./" and "../". Because of these two entries the copy fails and I get a
"security violation: remote host attempted to write to a '.' or '..' path!" error.
I hope someone can help with this.
Note: I only have SSH access and have to download everything via SSH commands.
Appending a star to the source should fix it, e.g.
pscp -r user@host:public_html/* d:\sites\
You can also do the same thing by not adding '/' at the end of your source path.
For example:
pscp -r user@host:public_html d:\sites
The above command will create the public_html directory at your destination (i.e. d:\sites) if it does not already exist.
Put simply, the above command makes an as-is clone of public_html at d:\sites.
One important thing: you may need to specify the port number with "-P 22".
pscp -r -P 22 user@host:public_html/* D:\sites
In my case, it only worked when I specified port 22 as above.