I'm attempting to edit the repo files under /etc/yum.repos.d in order to disable unwanted repos and enable wanted repos without using the vi editor. Each time I use the method I know of, it fails to make the changes. Does anyone know what I'm missing?
See Below:
yum-config-manager --disable \therepoid,anotherrepoid
Note: I'm not executing this from the yum.repos.d directory
First, you need to run the commands as root.
Then you should get the list of repos:
yum repolist all
And then you can enable or disable particular repo(s) by repo id from the previous command:
yum-config-manager --enable repo_id
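Note that yum-config-manager takes repo ids separated by spaces, not commas, which is likely why the command in the question fails. For example, with hypothetical repo ids:
sudo yum-config-manager --disable therepoid anotherrepoid   # space-separated, no backslash
sudo yum-config-manager --enable somewantedrepoid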
For reference, you can use the Red Hat documentation.
When using VirtualBox with a CentOS 6 image I cannot do yum update anymore. I have checked on the internet and it looks like CentOS 6 is deprecated.
[root@centos69 ~]# yum makecache
Failed to set locale, defaulting to C
Loaded plugins: fastestmirror, refresh-packagekit, security
Determining fastest mirrors
YumRepo Error: All mirror URLs are not using ftp, http[s] or file.
Eg. Invalid release/repo/arch combination/
removing mirrorlist with no valid mirrors: /var/cache/yum/x86_64/6/base/mirrorlist.txt
Error: Cannot find a valid baseurl for repo: base
[root@centos69 ~]# yum update
Failed to set locale, defaulting to C
Loaded plugins: fastestmirror, refresh-packagekit, security
Setting up Update Process
Loading mirror speeds from cached hostfile
YumRepo Error: All mirror URLs are not using ftp, http[s] or file.
Eg. Invalid release/repo/arch combination/
removing mirrorlist with no valid mirrors: /var/cache/yum/x86_64/6/base/mirrorlist.txt
Error: Cannot find a valid baseurl for repo: base
There are two possible ways to solve this issue:
Edit the CentOS-Base.repo file with vim
vim /etc/yum.repos.d/CentOS-Base.repo
Delete or comment every line that starts with 'mirrorlist'.
And add the following line to every [section] of the file, like [base], [updates], etc.:
baseurl=https://vault.centos.org/6.10/os/$basearch/
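For example, the [base] section might end up looking something like this (the commented-out mirrorlist line is illustrative; your file's exact contents may differ):
[base]
name=CentOS-$releasever - Base
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
baseurl=https://vault.centos.org/6.10/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6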
Another solution, on an image managed by cPanel, could be running their autorepair script, which repoints yum at https://vault.centos.org/:
/scripts/autorepair centos6_base_repo_is_no_more
More information about these solutions:
https://support.cpanel.net/hc/en-us/articles/360058490254--CentOS-6-End-of-Life-Notice
https://forums.cpanel.net/threads/yumrepo-error-and-cannot-find-valid-baseurl.682465/
Yes, like Red Hat 6.x, CentOS 6 went EOL in November 2020. I hope you don't have any sensitive stuff in that VM.
https://forums.centos.org/viewtopic.php?t=72710
You can change to use the vault at vault.centos.org. First you should disable any repo that no longer works. You can get a list of repos with
yum repolist
then you can disable them with
yum-config-manager --disable {reponame} {reponame}
like
yum-config-manager --disable base update
or just disable all of them:
yum-config-manager | grep '^\[' | tr -d '[]' | xargs yum-config-manager --disable
Once the broken repos are disabled you need to add the vault repo.
yum-config-manager --add-repo=https://vault.centos.org/6.10/os/x86_64/
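As a quick sanity check, something like the following should now list the vault repo (assuming the previous steps succeeded):
yum clean all
yum repolist enabled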
After that you can install packages as needed, but remember: there will be no updates to anything, so if security is a concern you need to change the OS to something newer that is supported.
I followed this post and it worked.
https://www.getpagespeed.com/server-setup/how-to-fix-yum-after-centos-6-went-eol
The alternative method in the post:
vi /etc/yum.repos.d/CentOS-Base.repo
and replace the content with this:
[C6.10-base]
name=CentOS-6.10 - Base
baseurl=http://vault.epel.cloud/6.10/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=1
metadata_expire=never
[C6.10-updates]
name=CentOS-6.10 - Updates
baseurl=http://vault.epel.cloud/6.10/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=1
metadata_expire=never
[C6.10-extras]
name=CentOS-6.10 - Extras
baseurl=http://vault.epel.cloud/6.10/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=1
metadata_expire=never
[C6.10-contrib]
name=CentOS-6.10 - Contrib
baseurl=http://vault.epel.cloud/6.10/contrib/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=0
metadata_expire=never
[C6.10-centosplus]
name=CentOS-6.10 - CentOSPlus
baseurl=http://vault.epel.cloud/6.10/centosplus/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=0
metadata_expire=never
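After saving the file, the yum cache usually needs a rebuild before the new baseurls take effect; for example:
yum clean all
yum makecache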
I'm trying to install Dropbox on CentOS 8, but the terminal gives strange errors. I tried different commands and get the same error.
First I downloaded the *.rpm file from the Dropbox website, and I'm currently trying to install it.
Commands I tried:
rpm -ivh nautilus-dropbox-2020.03.04-1.fedora.x86_64.rpm
yum localinstall nautilus-dropbox-2020.03.04-1.fedora.x86_64.rpm
Error:
Last metadata expiration check: 0:18:27 ago on Thu 12 Mar 2020 03:46:17 PM EET
Error:
Problem: conflicting requests
nothing provides libgnome >= %{gnome_version} needed by nautilus-dropbox-2020.03.04-1.fc21.x86_64
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
[root@localhost Downloads]#
Also tried --skip-broken and --nobest - but no luck.
Also tried sudo yum install libgnome but it gives an error:
Last metadata expiration check: 9:51:39 ago on Thu 12 Mar 2020 02:42:06 PM UTC.
No match for argument: libgnome
Error: Unable to find a match: libgnome
I have:
[adminuser@localhost ~]$ cat /etc/centos-release
CentOS Linux release 8.1.1911 (Core)
Tried to google this error, but no luck. Could you please give me any hint how I could overcome this?
Thank you
This is a bug in packaging. Contact Dropbox support and report it as a bug.
Technical details (just in case you are a Dropbox employee):
When an RPM is built and a macro is used, it gets expanded. Try it yourself:
$ rpm --eval '%{_bindir}'
/usr/bin
However, when the macro is not defined, you get the original value back:
$ rpm --eval '%{some_bullshit}'
%{some_bullshit}
So the macro gnome_version should likely contain some version, but it was never defined.
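For comparison, a correctly built spec file would define the macro before referencing it; a hypothetical sketch (the real Dropbox spec is not shown in this thread, and the version number is made up):
# hypothetical .spec snippet
%define gnome_version 2.16
Requires: libgnome >= %{gnome_version}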
nothing provides libgnome
"libgnome" is about libgnome-2 → https://linux.dropbox.com/fedora/ → I.e. Fedora only packages. CentOS 8 has no libgnome* available.
https://www.dropbox.com/install-linux → Compile from source → CentOS 8
# dnf install nautilus-devel-3.28.1-10.el8.x86_64 python3-docutils
tar xvf nautilus-dropbox-2020.03.04.tar.bz2
cd nautilus-dropbox-2020.03.04/
./configure && make
# make install
Result : nautilus-dropbox-2020.03.04-1.el8.x86_64.rpm https://drive.google.com/file/d/1AcxlVdbWOzQvcoVOFYCiaVny9MzgC-Ea/view?usp=sharing
# rpm -Uvh nautilus-dropbox-2020.03.04-1.el8.x86_64.rpm : No issues.
First, realize that the command shown at the install page is for the headless installation. It will probably work, but my preference is to use Dropbox with Nautilus integration.
These instructions assume an installation of Dropbox with Nautilus integration.
We need to compile the installer from source.
a. Download the latest package
wget https://linux.dropbox.com/packages/nautilus-dropbox-2020.03.04.tar.bz2
b. Extract the tarball
tar xjf ./nautilus-dropbox-2020.03.04.tar.bz2
c. Try to compile
cd nautilus-dropbox-2020.03.04; ./configure;
Then you get an error:
Error:
 Problem: conflicting requests
  - nothing provides libgnome >= %{gnome_version} needed by nautilus-dropbox-2020.03.04-1.fc21.x86_64
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
Now we need to install nautilus-devel and python3-docutils
NOTE: You will get configure: error: couldn't find docutils if you forget python3-docutils.
This command will enable the PowerTools repository and install what is needed:
dnf --enablerepo=PowerTools install nautilus-devel python3-docutils
Now you can run ./configure && sudo make install
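Collected into one sequence, the steps above would look like this (sudo assumed where needed):
wget https://linux.dropbox.com/packages/nautilus-dropbox-2020.03.04.tar.bz2
tar xjf ./nautilus-dropbox-2020.03.04.tar.bz2
cd nautilus-dropbox-2020.03.04
sudo dnf --enablerepo=PowerTools install nautilus-devel python3-docutils
./configure && sudo make install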
That's it. Go to the start menu, type "Dropbox", and it will start the installer.
Restore a local backup of Dropbox (optional)
If you have a local backup, turn off the network after you see the Dropbox folder created. Then copy all your files into that folder and turn the network back on after the copy.
This solution worked for me running CentOS Linux release 8.2.2004 (Core).
I'm behind a firewall and I have the fzf.tar.gz package which has the content of the git repo. How can I install fzf offline?
The install command ~/.fzf/install is reaching out to github.com. I'm on Red Hat with no internet connection.
https://github.com/junegunn/fzf
This is just what I observed; I can't guarantee I didn't miss anything.
First, clone fzf to $FZF_DIR on an online PC.
Then I'd suggest executing 'install' on the online PC to get the necessary files:
~/.fzf/bin/fzf (this one is downloaded by the install script)
~/.fzf.bash (this one is generated by the install script)
cp ~/.fzf/bin/fzf $FZF_DIR/bin
Copy $FZF_DIR (with the fzf binary in it) and .fzf.bash to your offline PC, then:
ln -s $FZF_DIR ~/.fzf
Source .fzf.bash in your .bashrc.
The entire $FZF_DIR is needed because it includes some other useful scripts sourced by .fzf.bash.
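A sketch of the whole transfer, reading the steps above literally ($FZF_DIR is a placeholder, and I'm assuming the install script drops the binary next to itself, which matches the files listed above):
# on the online PC: clone and run the installer once
git clone https://github.com/junegunn/fzf.git "$FZF_DIR"
"$FZF_DIR/install"        # downloads the fzf binary into $FZF_DIR/bin and generates ~/.fzf.bash
# transfer $FZF_DIR and ~/.fzf.bash to the offline PC, then on that machine:
ln -s "$FZF_DIR" ~/.fzf
echo '[ -f ~/.fzf.bash ] && source ~/.fzf.bash' >> ~/.bashrc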
I want to write a Dockerfile which exports a directory from a remote Subversion repository into the build context so I can work with these files in subsequent commands. The repository is secured with user/password authentication.
That Dockerfile could look like this:
# base image
FROM ubuntu
# install subversion client
RUN apt-get -y update && apt-get install -y subversion
# export my repository
RUN svn export --username=myUserName --password=myPassword http://subversion.myserver.com/path/to/directory
# further commands, e.g. on container start run a file just downloaded from the repository
CMD ["/bin/bash", "path/to/file.sh"]
However, this has the drawback of printing my username and password on the screen or any logfile where the stdout is directed, as in Step 2 : RUN svn export --username=myUserName --password=myPassword http://subversion.myserver.com/path/to/directory. In my case, this is a Jenkins build log which is also accessible by other people who are not supposed to see the credentials.
What would be the easiest way to hide the echo of username and password in the output?
Until now, I have not found any way to execute RUN commands in a Dockerfile silently when building the image. Could the password maybe be imported from somewhere else and attached to the command beforehand so it does not have to be printed anymore? Or are there any methods for password-less authentication in Subversion that would work in the Dockerfile context (in terms of setting them up without interaction)?
The Subversion Server is running remotely in my company and not on my local machine or the Docker host. To my knowledge, I have no access to it except for accessing my repository via username/password authentication, so copying any key files as root to some server folders might be difficult.
The Dockerfile RUN command is always executed and cached when the Docker image is built, so the variables that svn needs to authenticate must be provided at build time. You can move the svn export call to when docker run is executed in order to avoid this kind of problem. To do that, you can create a bash script, declare it as the Docker entrypoint, and pass environment variables for username and password. Example:
# base image
FROM ubuntu
ENV REPOSITORY_URL http://subversion.myserver.com/path/to/directory
# install subversion client
RUN apt-get -y update && apt-get install -y subversion
# make it executable before you add it here, otherwise docker will complain
ADD docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT /entrypoint.sh
docker-entrypoint.sh
#!/bin/bash
# maybe add some validation here that the variables $REPO_USER and $REPO_PASSWORD exist
svn export --username="$REPO_USER" --password="$REPO_PASSWORD" "$REPOSITORY_URL"
# continue execution
path/to/file.sh
Run your image:
docker run -e REPO_USER=jane -e REPO_PASSWORD=secret your/image
Or you can put the variables in a file:
.svn-credentials
REPO_USER=jane
REPO_PASSWORD=secret
Then run:
docker run --env-file .svn-credentials your/image
Remove the .svn-credentials file when you're done.
Maybe using SVN with SSH is a solution for you? You could generate a public/private key pair. The private key could be added to the image whereas the public key gets added to the server.
For more details you could have a look at this stackoverflow question.
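A rough sketch of how that could look in the Dockerfile (the key file name and the svn+ssh URL are assumptions, and note that a private key baked into an image layer is readable by anyone who can pull the image):
# bake a dedicated deploy key into the image
RUN mkdir -p /root/.ssh
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa
# accept the server's host key non-interactively
RUN ssh-keyscan subversion.myserver.com >> /root/.ssh/known_hosts
# no password appears on the command line or in the build log
RUN svn export svn+ssh://subversion.myserver.com/path/to/directory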
One solution is to ADD the entire SVN directory you previously checked out on your builder file-system (or added as an svn:externals if your Dockerfile is itself in an SVN repository, like this: svn propset svn:externals 'external_svn_directory http://subversion.myserver.com/path/to/directory' ., then do an svn up).
Then in your Dockerfile you can simply have this:
ADD external_svn_directory /tmp/external_svn_directory
RUN svn export /tmp/external_svn_directory /path/where/to/export/to
RUN rm -rf /tmp/external_svn_directory
Subversion stores authentication details (unless this is disabled in its configuration) on the client side and uses the stored username/password on request for subsequent operations on the same URL.
Thus you only have to run one (successful) svn export in the Dockerfile with username/password, and SVN can use the cached credentials (with the auth options removed from the command line) afterwards.
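Read that way, a sketch of the Dockerfile part could be (the follow-up svn log call is only an illustration of a later operation on the same URL):
# first and only authenticated call; SVN caches the credentials under /root/.subversion/auth
RUN svn export --username=myUserName --password=myPassword http://subversion.myserver.com/path/to/directory
# subsequent operations on the same URL can omit the credentials
RUN svn log -l 1 http://subversion.myserver.com/path/to/directory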
I've created a local yum repository for RHEL 7 on a separate server. Then I used the "reposync" command to get the packages from RHN.
reposync --download-metadata --gpgcheck -l --repoid=rhui-REGION-rhel-server-rh-common --download_path=/rhel_security_repo/
reposync --download-metadata --gpgcheck -l --repoid=rhui-REGION-rhel-server-optional --download_path=/rhel_security_repo/
reposync --download-metadata --gpgcheck -l --repoid=rhui-REGION-rhel-server-releases --download_path=/rhel_security_repo/
After that, I executed the following command to create my repo:
createrepo --database /rhel_security_repo/
The repository was created successfully, with more than 9,000 packages as expected. What I am trying to do now is to have other boxes use this local repository. I've created a yum config file on the other boxes where the baseurl points to the server with the local yum repository.
[security-updates-rhel7]
name=Repository for RHEL7 security updates
baseurl=ip-server
enabled=1
gpgcheck=1
All the servers are able to talk to this server with the local yum repo and they can install packages from it.
The problem is I can't update packages when I run yum update --security:
Example:
yum --disablerepo="*" --enablerepo="security-updates-rhel7" --security update mariadb-libs
Loaded plugins: amazon-id, rhui-lb
--> 1:mariadb-libs-5.5.37-1.el7_0.x86_64 from @rhui-REGION-rhel-server-releases removed (updateinfo)
--> 1:mariadb-libs-5.5.40-2.el7_0.x86_64 from security-updates-rhel7 removed (updateinfo)
No packages needed for security; 1 packages available
Resolving Dependencies
However, if I run the command without --security, I can see available updates:
yum --disablerepo="*" --enablerepo="security-updates-rhel7" update mariadb-libs
Loaded plugins: amazon-id, rhui-lb
Resolving Dependencies
--> Running transaction check
---> Package mariadb-libs.x86_64 1:5.5.37-1.el7_0 will be updated
---> Package mariadb-libs.x86_64 1:5.5.40-2.el7_0 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
==============================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================
Updating:
mariadb-libs x86_64 1:5.5.40-2.el7_0 security-updates-rhel7 753 k
Transaction Summary
==============================================================================================================================
Upgrade 1 Package
Total download size: 753 k
Is this ok [y/d/N]:
It seems I lost the security metadata when I did the reposync.
Any ideas what the problem could be?
Etan has the right idea; those are the two ways we found to get around the issue with RHEL 6. You could pull the Red Hat metadata straight out of your yum cache and copy it into your local repo, and that works 90% of the time... but 10% of the time it will give you random failures if Red Hat happens to be updating a repository while you are syncing it.
Redhat has a guide for how to graft security metadata into your local repo in RHEL5/6, I assume it works similarly in RHEL7. If you have a Redhat Support account, see: https://access.redhat.com/solutions/55654
If you don't, below is my own take on it:
Run your reposync command with --download-metadata and other trimmings, but I would start with one repo at a time and put each one in its own directory, similar to how Red Hat does it, e.g. mkdir -p /path/to/repo-id && reposync -l -n --download-metadata -r repo-id -p /path/to/repo-id/.
Pull the abcdefghij-updateinfo.xml.gz metadata files from Red Hat, which contain the security metadata for each repository. Do this by running yum list-sec and then looking for the file in your local yum cache, under each repository's subdirectory, probably somewhere in /var/cache/yum/arch/7Server/repo-id.
Run createrepo on just that repository. createrepo -v /path/to/repo-id/
Go into /path/to/repo-id/ and then into the repodata subfolder. Copy the abcdefghij-updateinfo.xml.gz from your local yum cache into the repodata folder, but rename it to remove the hash at the beginning, so you are left with a file called updateinfo.xml.gz.
Use the modifyrepo command to insert the security metadata into that repo's table of contents (repomd.xml) file.
modifyrepo /path/to/repo-id/repodata/updateinfo.xml.gz /path/to/repo-id/repodata/
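Put together, the whole grafting procedure might look like this sketch (repo-id, the paths, and the cache directory are placeholders from the steps above; the hash prefix on the updateinfo file will differ):
REPO=repo-id
mkdir -p /path/to/$REPO
reposync -l -n --download-metadata -r $REPO -p /path/to/$REPO/
createrepo -v /path/to/$REPO/
# grab the security metadata from the local yum cache, dropping the hash prefix
cp /var/cache/yum/x86_64/7Server/$REPO/*-updateinfo.xml.gz /path/to/$REPO/repodata/updateinfo.xml.gz
modifyrepo /path/to/$REPO/repodata/updateinfo.xml.gz /path/to/$REPO/repodata/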