How to restore a MongoDB backup into a new instance? - mongorestore

I have a .tgz file that I extracted into a folder; it contained a lot of .gz files, which I extracted as well. Now I have a folder with some .bson and .json files.
I installed mongod v4.2.14 on a Debian machine and haven't done anything to the database since, other than connecting to it for the first time.
Please help me restore this folder into this instance. My problem is that when I use mongorestore without authentication it gives me an error, and I know that in MongoDB we don't have a global username and password. I don't know what to put in -u and -p:
mongorestore -u WHAT!!! -p WHAT!!! --authenticationDatabase admin -d zaer_db ./zaer_db2

The problem was that I had created a database before and set up a password for it. I reinstalled Linux, reinstalled MongoDB, and mongorestore worked fine without a password.
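For anyone hitting the same situation: on a fresh instance where access control was never enabled, you can omit -u, -p and --authenticationDatabase entirely. A minimal sketch using the folder and database name from the question:
# no auth flags needed while the new mongod has no users configured
mongorestore -d zaer_db ./zaer_db2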

Can't save to crontab via SSH, but can when logged in locally

I have a remote headless server (macOS Big Sur 11.3.1). When I log in via SSH (as either root or a regular user), I am unable to save to the crontab.
When I use the following command:
% crontab -e
I can see a cron job that I saved when I was logged in locally (not via SSH). After editing and exiting the crontab, I get the following error:
crontab: installing new crontab
crontab: tmp/tmp.1028: Operation not permitted
crontab: edits left in /tmp/crontab.kKYx3tt4c1
While logged in via SSH, I have instead tried to edit the crontab with this command:
% sudo crontab -e
To my surprise, the cron job that I saved when logged in locally is not listed. That makes sense in hindsight, since sudo crontab -e edits root's crontab rather than my user's. In any case, I can't save to the crontab when using sudo either; it gives the exact same error as above.
I have followed the advice of a few internet posts suggesting granting the cron and sshd executables "Full Disk Access" in the Mac System Preferences. However, the same error persists.
I'm not sure what to try next.
The issue was solved by giving sshd-keygen-wrapper Full Disk Access. Don't ask me why it needs that, but it is working now. I hope this helps anyone with the same issue.
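In case anyone needs the exact item to add: the wrapper binary typically lives at /usr/libexec/sshd-keygen-wrapper (path observed on recent macOS releases; verify on yours). In System Preferences > Security & Privacy > Privacy > Full Disk Access, click +, press Cmd+Shift+G in the file dialog, and enter:
/usr/libexec/sshd-keygen-wrapper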

How to access a folder via SMB protocol from ASP Net Core [duplicate]

I am trying to set up a script that will:
Connect to a Windows share
Using LOAD DATA LOCAL INFILE, upload the two files into their appropriate db tables
Unmount the share
Situation:
I can currently vpnc into this remote machine
Problem:
I cannot mount the share. This command:
mount -t cifs //ip.address/share /mnt/point -o username=u,password=p,port=445
fails with:
mount error(110) Connection timed out
I am attempting to do this manually first.
The remote server is open on port 445.
Questions:
Do I even need to vpnc in first?
Do I need to do a route add for the remote IP/mask/gateway after vpnc?
Thank you!
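To answer the routing question first: whether you need a route add depends on whether vpnc pushed a route to the share's network. You can check reachability of the SMB port directly (a diagnostic sketch, assuming netcat is installed):
# probe TCP port 445 on the share host through the VPN
nc -zv -w 5 ip.address 445
# list the kernel routing table and look for an entry covering the remote network
ip route
If the probe times out too, the problem is VPN routing rather than CIFS itself.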
The mount.cifs utility is provided by the samba-client package. It can be installed from the standard CentOS yum repository by running the following command:
yum install samba samba-client cifs-utils
Once installed, you can mount a Windows SMB share on your CentOS server by running the following command:
Syntax:
mount.cifs //SERVER_ADDRESS/SHARE_NAME MOUNT_POINT -o user=USERNAME
SERVER_ADDRESS: the Windows system’s IP address or hostname
SHARE_NAME: the name of the shared folder configured on the Windows system
MOUNT_POINT: the local mount point on your CentOS server
USERNAME: a Windows user that has access to this share
I am mounting a share from \\10.11.10.26\snaps.
Make a directory under /mnt for the mount point:
mkdir /mnt/mymount
Now I mount the snaps folder from indiafps02; the username is a domain account (from Mydomain in this case):
mount.cifs //10.11.10.26/snaps /mnt/mymount -o user=Girish.KG
Now you can see the contents by typing:
ls /mnt/mymount
After performing your task, just run the umount command:
umount /mnt/mymount
That's it. You are done.
There is no need to install samba and samba-client, only cifs-utils, using the command:
yum install cifs-utils
After that, on the Windows side, share the folder you would like to mount in CentOS, if you haven't already (c:\inetpub\wwwroot in my case).
Make sure you share it with a specific user whose password you know (netops in my case).
Create a directory in CentOS to mount the Windows share into (/mnt/cm in my case).
After that, run this simple command as root:
mount.cifs //10.16.0.160/wwwroot /mnt/cm/ -o user=netops
CentOS will prompt you for the Windows user's password.
You are done.
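One extra tip beyond the answers above: mount.cifs also accepts a credentials file via its documented credentials= option, which keeps the password out of the command line and shell history. A sketch, using a hypothetical file path:
# /root/.smbcredentials (protect it with chmod 600):
username=netops
password=your_password_here
# then mount with:
mount.cifs //10.16.0.160/wwwroot /mnt/cm/ -o credentials=/root/.smbcredentials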

Docker wrong permission apache2

I have a problem with my Docker installation. When I run docker-compose up, I get this error:
front_1 | /var/lock/apache2 already exists but is not a directory owned by www-data.
front_1 | Please fix manually. Aborting.
I get this error because I added this line to my Dockerfile:
RUN usermod -u 1000 www-data
But if I delete this line, my Symfony project doesn't work with Docker.
Do you have any ideas to solve my problem?
Best regards
As I see it, you are trying to change the UID of the www-data user inside Docker to match your host machine user's UID (yours), so you can open project files in your IDE.
This introduces file permission problems for the apache2 service, which can't read its own files (config, pid, ...), simply because it is not the same user anymore.
A quick 'dirty' solution is to change only the owner of the Symfony project files to UID 1000, but keep the group (GID) as www-data. This applies only to a dev machine; otherwise you don't need it. Run this command inside the container:
chown -R 1000:www-data /home/project
You can create a bash alias inside the container to have it at hand.
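If you prefer to bake this into the image instead of running it by hand, a minimal Dockerfile sketch (assuming the project lives at /home/project, as in the command above):
# align www-data's UID with the host user, then hand the project files
# back to that UID while keeping the www-data group
RUN usermod -u 1000 www-data \
 && chown -R 1000:www-data /home/project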
Another option is to use ACLs, which set permissions on existing files and folders and are inherited by newly created files under the given folder. This could go in a bootstrap script inside the container, but only for dev mode. This way you won't need to re-run chown.
chown -R 1000:www-data /home/project #set for existing files
/usr/bin/setfacl -R -m u:www-data:rwx -m u:0:rwx -m u:1000:rwx /home/project
/usr/bin/setfacl -dR -m u:www-data:rwx -m u:0:rwx -m u:1000:rwx /home/project
Each -m is for a different user: the first is www-data (apache2), the second is 0 (root), and the third is 1000 (you).
Remember that UIDs can change at any time, so this could create a security hole if the mentioned users don't have the expected UIDs.
I used the second method only for folders where PHP (via apache2) sets permissions (uploaded files, cache, ...) but the host user still needs to access the files.

Impossible write into the AJXP_DATA_PATH folder ajaxplorer

I uploaded ajaxplorer "pydio-core-5.0.4.zip" to my server, and after extracting the files into a folder on the server I requested the folder to start the install, but I get this message:
"Impossible write into the AJXP_DATA_PATH folder: Make sure to grant write access to this folder for your webserver!"
I set the /data folder's permissions to 777 and it made no difference.
Any solution?
I had the same problem a few hours ago.
The problem:
You put full permissions (777) on the data folder, but the subfolders don't get them.
The solution:
sudo chmod -R 777 data
or
sudo mkdir -m 777 your_pydio_path/data/tmp/sessions
I know this is old, but I was having the same issue with pydio-core-6.0.8. I'll preface this by saying that I am a PHP noob, but I was able to resolve my issue without a chmod 777 command. Instead, I made the nginx user the owner of the data directory:
chown -R nginx /path/to/pydio-core-6.0.8/data
Then I made sure that php-fpm was running as the nginx user with these two php-fpm.conf settings:
listen.owner = nginx
user = nginx
After restarting php-fpm, I was able to load the pydio page, which went into the startup wizard.
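For reference, the restart on a systemd-based system is typically (the service name varies by distribution, e.g. php-fpm or php7.x-fpm):
systemctl restart php-fpm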
The chmod -R 777 command is easy, but it's dangerous!
Go to /var/www/pydio for apache2 or /usr/share/nginx/html/pydio for nginx and try:
chmod ugo+x data
It's safer!

rsync deploy and file/directories permissions

I'm trying to use rsync to deploy my website, which resides on a shared web host.
phpsuexec is running on the host, and that has caused me permission problems with the files and directories I've transferred via rsync. Files must be set to 644 and directories to 755, otherwise I get a 500 error.
After several attempts, I came up with this rsync command:
rsync -avz -e ssh --chmod=Du=rwx,go=rx,Fu=rw,og=r -p --exclude-from=/var/www/mylocalfolder/.rsyncignore /var/www/mylocalfolder/ user@mywebsite.net:~/
Unfortunately, this command doesn't work as expected: all the transferred directories get set to 744, while file permissions are correctly set to 644.
I can't understand what is wrong.
P.S. I use Linux on my local machine.
Try it like this:
--chmod=Du=rwx,Dg=rx,Do=rx,Fu=rw,Fg=r,Fo=r
It worked for me. The problem with your version is that the chmod specs are applied in order, and the final unprefixed og=r applies to directories as well as files, overriding the earlier go=rx and stripping the execute bit from group and other on directories (hence 744). Prefixing every spec with D or F keeps the directory and file permissions separate.
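Putting the fix back into the original command, the full invocation would be (same paths as in the question; -p can be dropped since -a already implies it):
rsync -avz -e ssh --chmod=Du=rwx,Dg=rx,Do=rx,Fu=rw,Fg=r,Fo=r --exclude-from=/var/www/mylocalfolder/.rsyncignore /var/www/mylocalfolder/ user@mywebsite.net:~/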