Postgres Configuration messed up after OSX Mavericks - ruby-on-rails-3

After I installed Mavericks, my Postgres configuration seems completely messed up. I installed the dev tools for Mavericks from Xcode and I've tried putting host: localhost in database.yml, but if I try to run rails s I still get:
could not connect to server: No such file or directory (PGError)
Is the server running locally and accepting
connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?
If I try to start postgres manually with:
pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start
I get: server starting
but pg_ctl -D /usr/local/var/postgres status returns pg_ctl: no server running ??
I tried reinstalling the pg gem and reloading PostgreSQL, but to no avail. brew info postgres returns:
postgresql: stable 9.3.1
http://www.postgresql.org/
Conflicts with: postgres-xc
/usr/local/Cellar/postgresql/9.3.1 (2919 files, 39M) *
Since I did brew reinstall postgres I now get this:
The data directory was initialized by PostgreSQL version 9.2,
which is not compatible with this version 9.3.1.
I have a postgres query tool that doesn't seem to have any trouble connecting so I know the data is still there.
I would really appreciate if someone can help me figure this out, thanks.

The error The data directory was initialized by PostgreSQL version 9.2, which is not compatible with this version 9.3.1. is caused by the data directory being incompatible between 9.2 and 9.3.
If you don't need the old data in your 9.2 database, it's very easy to solve this problem:
rm -rf /usr/local/var/postgres
where /usr/local/var/postgres is the default data directory path. You may need to change the path according to your setup.
After deleting the old data directory, init a new one with 9.3:
initdb /usr/local/var/postgres -E utf8
Then you are good to go!
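If the server doesn't come back up on its own, you can start it by hand the same way as before and check that it is running (a sketch using the same paths as in the question; adjust if your data directory differs):
pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start
pg_ctl -D /usr/local/var/postgres status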
====
If you need to migrate the old data from 9.2 to 9.3:
a) rename your old data directory:
mv /usr/local/var/postgres /usr/local/var/postgres9.2
b) init a new 9.3 db (create a new data directory):
initdb /usr/local/var/postgres -E utf8
c) migrate:
pg_upgrade \
-b /usr/local/Cellar/postgresql/9.2.4/bin/ \
-B /usr/local/Cellar/postgresql/9.3.1/bin/ \
-d /usr/local/var/postgres9.2 \
-D /usr/local/var/postgres \
-v
-b is your old binary directory while -B is your new one; you can get both paths from brew info postgres.
-d is the renamed old data directory while -D is the new data directory you just created in step b.
then you would get the following message:
Creating script to analyze new cluster ok
Creating script to delete old cluster ok
Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
analyze_new_cluster.sh
Running this script will delete the old cluster's data files:
delete_old_cluster.sh
d) rock with your postgres 9.3!
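For completeness, the wrap-up after a successful pg_upgrade looks roughly like this (a sketch — the two scripts are written to the directory you ran pg_upgrade from):
pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start
./analyze_new_cluster.sh
./delete_old_cluster.sh   # only once you've confirmed the new cluster has all your data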

The most conservative way to upgrade PostgreSQL is to:
Compile a version of PostgreSQL using 9.2.X sources (e.g. ./configure --prefix=/tmp/myPg-9.2 && make && make install)
Start the database using the 9.2 binaries (e.g. /tmp/myPg-9.2/bin/postgres -D /usr/local/var/postgres, this will keep PostgreSQL in the foreground in this terminal)
Dump the database using pg_dumpall
Shut down the 9.2 database.
Move /usr/local/var/postgres to someplace safe (e.g. /usr/local/var/postgres-9.2)
initdb a new database using the new 9.3 binaries.
Load the dump from pg_dumpall.
Make sure you hold on to a copy of your old 9.2 data directory until you've successfully recovered. Once you've determined that you've fully recovered from this situation, you can blow away your temp 9.2 installation in /tmp/myPg-9.2 and the old 9.2 data directory /usr/local/var/postgres-9.2. I'd make a backup of /usr/local/var/postgres-9.2 and would sit on it for a few months "just in case" (e.g. tar cjpf /usr/local/var/postgres-9.2-2013-10-31.tar.bz2 /usr/local/var/postgres-9.2).
Per comment, adding a few extra steps:
Compile a version of PostgreSQL:
cd /tmp
Download the latest .bz2 tarball of PostgreSQL's 9.2 source from http://www.postgresql.org/ftp/source/ (currently 9.2.5 as of 2013-10-31).
tar xjpf postgresql-9.2.5.tar.bz2
cd postgresql-9.2.5
./configure --prefix=/tmp/myPg-9.2 - Don't run this as root
make - Also don't run this as root
make install - Frequently you would do this as root, but you don't need to this time because you're installing into /tmp, where you have permission to install. If your prefix was /usr/local you would have to run this command as root.
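To make the dump-and-restore portion of the steps above concrete, the commands look roughly like this (a sketch — the dump file name and user are placeholders, and the 9.2 server started earlier must still be running when you take the dump):
# from a second terminal, dump everything with the old 9.2 binaries
/tmp/myPg-9.2/bin/pg_dumpall -U yourname > /tmp/pg92-dumpall.sql
# stop the 9.2 server (Ctrl+C in the terminal running it), then:
mv /usr/local/var/postgres /usr/local/var/postgres-9.2
initdb /usr/local/var/postgres -E utf8
pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start
psql -d postgres -f /tmp/pg92-dumpall.sql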

Related

Can't install nautilus-dropbox on CentOS 8

I'm trying to install Dropbox on CentOS 8, but the terminal gives strange errors. I've tried different commands and get the same error.
First I downloaded the *.rpm file from the Dropbox website, and now I'm trying to install it.
Commands I tried:
rpm -ivh nautilus-dropbox-2020.03.04-1.fedora.x86_64.rpm
yum localinstall nautilus-dropbox-2020.03.04-1.fedora.x86_64.rpm
Error:
Last metadata expiration check: 0:18:27 ago on Thu 12 Mar 2020 03:46:17 PM EET
Error:
Problem: conflicting requests
nothing provides libgnome >= %{gnome_version} needed by nautilus-dropbox-2020.03.04-1.fc21.x86_64
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
[root@localhost Downloads]
Also tried --skip-broken and --nobest - but no luck.
Also tried sudo yum install libgnome but it gives error:
Last metadata expiration check: 9:51:39 ago on Thu 12 Mar 2020 02:42:06 PM UTC.
No match for argument: libgnome
Error: Unable to find a match: libgnome
I have:
[adminuser#localhost ~]$ cat /etc/centos-release
CentOS Linux release 8.1.1911 (Core)
I tried to google this error, but no luck. Could you please give me a hint as to how I could overcome this?
Thank you
This is a bug in packaging. Contact Dropbox support and report it as a bug.
Technical details (just in case you are a Dropbox employee):
When building an RPM, any macro you use gets expanded. Try it yourself:
$ rpm --eval '%{_bindir}'
/usr/bin
However, when the macro is not defined, you get the original value back:
$ rpm --eval '%{some_bullshit}'
%{some_bullshit}
So the macro gnome_version was probably meant to contain some version, but it was never defined.
nothing provides libgnome
"libgnome" is about libgnome-2 → https://linux.dropbox.com/fedora/ → I.e. Fedora only packages. CentOS 8 has no libgnome* available.
https://www.dropbox.com/install-linux → Compile from source → CentOS 8
# dnf install nautilus-devel-3.28.1-10.el8.x86_64 python3-docutils
tar xvf nautilus-dropbox-2020.03.04.tar.bz2
cd nautilus-dropbox-2020.03.04/
./configure && make
# make install
Result : nautilus-dropbox-2020.03.04-1.el8.x86_64.rpm https://drive.google.com/file/d/1AcxlVdbWOzQvcoVOFYCiaVny9MzgC-Ea/view?usp=sharing
# rpm -Uvh nautilus-dropbox-2020.03.04-1.el8.x86_64.rpm : No issues.
First, realize that the command shown on the install page is for the headless installation. It will probably work, but my preference is to use Dropbox with Nautilus integration.
These instructions assume an installation of Dropbox with Nautilus integration.
We need to compile the installer from source.
a. Download the latest package
wget https://linux.dropbox.com/packages/nautilus-dropbox-2020.03.04.tar.bz2
b. Extract tarball
tar xjf ./nautilus-dropbox-2020.03.04.tar.bz2
c. Try to compile
cd nautilus-dropbox-2020.03.04; ./configure;
Then you get an error:
Error:
Problem: conflicting requests
- nothing provides libgnome >= %{gnome_version} needed by nautilus-dropbox-2020.03.04-1.fc21.x86_64
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
Now we need to install nautilus-devel and python3-docutils
NOTE: You will get configure: error: couldn't find docutils if you forget python3-docutils.
This command will enable the PowerTools repository and install what is needed:
dnf --enablerepo=PowerTools install nautilus-devel python3-docutils
Now you can run ./configure && sudo make install
That's it. Go to the start menu, type "Dropbox", and it will start the installer.
Restore a local backup of Dropbox (optional)
If you have a local backup, turn off the network after you see the Dropbox folder created. Then copy all your files into that folder and turn the network back on once the copy is done.
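A rough example of that copy step (assuming a hypothetical backup at ~/dropbox-backup and the standard ~/Dropbox folder):
rsync -a ~/dropbox-backup/ ~/Dropbox/   # run while the network is off, then re-enable it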
This solution worked for me running CentOS Linux release 8.2.2004 (Core).

Cannot upgrade Arch Linux (pacman -Syu not working)

I am running sudo pacman -Syu on my Arch Linux and I am getting the following:
cristian@localhost:~$ sudo pacman -Syu
:: Synchronizing package databases...
core is up to date
extra is up to date
community is up to date
multilib is up to date
xenlism-arch is up to date
:: Starting full system upgrade...
resolving dependencies...
looking for conflicting packages...
error: failed to prepare transaction (could not satisfy dependencies)
:: package-query: requires pacman<4.3
What's the solution to fix this?
UPDATE
I have tried both solutions suggested by @jham. I have completely removed yaourt and package-query. pacman -Qi pacman shows nothing under 'Required By'. I also commented out multilib and xenlism-arch from pacman.conf. When I do pacman -Syu I get the following:
:: Proceed with installation? [Y/n]
(244/244) checking keys in keyring [###################################] 100%
(244/244) checking package integrity [###################################] 100%
error: confuse: signature from "Thorsten Töpper <atsutane@freethoughts.de>" is unknown trust
:: File /var/cache/pacman/pkg/confuse-2.8-2-x86_64.pkg.tar.xz is corrupted (invalid or corrupted package (PGP signature)).
Do you want to delete it? [Y/n]
error: failed to commit transaction (invalid or corrupted package)
Errors occurred, no packages were upgraded.
I just had this very same error. The problem seems to be that there are new keys in the archlinux-keyring package, and new packages (confuse) signed with those keys. Since both packages are updated in the same transaction, the new keys cannot be used until the update is finished, but the transaction will not start until the packages are verified...
The solution would be to update the archlinux-keyring by itself:
pacman -S archlinux-keyring
And then do the rest:
pacman -Su
If that fails, you could try running through the keys manually, with:
pacman-key --populate
but usually, it is not necessary.
I just happened to have the same problem, and solved it the following way:
$ sudo pacman -Rdd package-query # Purge the conflicting package-query
$ sudo pacman -Syu # There it works
# Now reinstall package-query
$ git clone https://aur.archlinux.org/package-query.git
$ cd package-query && makepkg -si
For anyone else coming in here who didn't find rorido's solution working, try Bernhard Fürst's or jham's answer of just pacman -S package-query, which worked for me without issues.
Also, if you are still getting issues like libalpm.so.8: cannot open shared object file: No such file or directory, then you have to manually reinstall package-query and yaourt.
sudo pacman-db-upgrade
yaourt -R package-query yaourt
git clone https://aur.archlinux.org/package-query.git
cd package-query
makepkg -si
cd ..
git clone https://aur.archlinux.org/yaourt.git
cd yaourt
makepkg -si
cd ..
If you still have an error, try this:
sudo find /var/cache/pacman/pkg/ -iname "*.part" -exec rm {} \;
It removes .part files, which cause the "invalid or corrupted package" error. After you remove them, run this:
sudo pacman -Syyu
This should solve the problem, provided there is no missing key.
I am using Manjaro and after a long search, I was able to fix the issue by following these simple commands.
NOTE: You must run pacman-key --init before first using pacman; the local
keyring can then be populated with the keys of all official Manjaro Linux
packagers with pacman-key --populate archlinux manjaro.
failed to prepare transaction (invalid or corrupted database)
In my case it was due to faulty mirror servers delivering junk.
Comment out those standard servers and use a quality server, e.g.
ftp://ftp5.gwdg.de/pub/linux/archlinux/community/os/x86_64/
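For reference, in /etc/pacman.d/mirrorlist that server would be written in the usual Server = form (a sketch — substitute whichever mirror you trust):
Server = ftp://ftp5.gwdg.de/pub/linux/archlinux/$repo/os/$arch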
Too bad that /etc/pacman.conf is so poorly commented, as if to deliberately be unhelpful, always being vague, never concretely helpful.
This was the last error in a seemingly endless ordeal of errors from pacman. Keys are unmanageable, servers are a mess, libs spell chaos. It took me more than a day to get through this horrific Arch experience.
The following is for Arch Linux, but it applies to other Linux distros too.
To correct an invalid KEY one needs to do the following:
rm -fr /etc/pacman.d/gnupg
pacman-key --init
pacman-key --populate archlinux
If the key throwing the error is in Blackarch, then you also need to run:
sudo pacman-key --populate blackarch
and finally
sudo pacman -Sy archlinux-keyring
sudo pacman-key --populate archlinux
sudo pacman-key --refresh-keys

Security plugin in Local yum repository

I've created a local yum repository for RHEL 7 on a separate server. Then I used the "reposync" command to get the packages from RHN.
reposync --download-metadata --gpgcheck -l --repoid=rhui-REGION-rhel-server-rh-common --download_path=/rhel_security_repo/
reposync --download-metadata --gpgcheck -l --repoid=rhui-REGION-rhel-server-optional --download_path=/rhel_security_repo/
reposync --download-metadata --gpgcheck -l --repoid=rhui-REGION-rhel-server-releases --download_path=/rhel_security_repo/
After that, I executed the following command to create my repo:
createrepo --database /rhel_security_repo/
The repository was created successfully with more than 9000 packages, as expected. What I am trying to do now is have other boxes use this local repository. I've created a yum config file on the other boxes, with the baseurl pointing to the server hosting the local yum repository.
[security-updates-rhel7]
name=Repository for RHEL7 security updates
baseurl=ip-server
enabled=1
gpgcheck=1
All the servers are able to talk to this server with the local yum repo and they can install packages from it.
The problem is I can't update packages when I run yum update --security:
Example:
yum --disablerepo="*" --enablerepo="security-updates-rhel7" --security update mariadb-libs
Loaded plugins: amazon-id, rhui-lb
--> 1:mariadb-libs-5.5.37-1.el7_0.x86_64 from @rhui-REGION-rhel-server-releases removed (updateinfo)
--> 1:mariadb-libs-5.5.40-2.el7_0.x86_64 from security-updates-rhel7 removed (updateinfo)
No packages needed for security; 1 packages available
Resolving Dependencies
However, if I run the command without --security, I can see available updates:
yum --disablerepo="*" --enablerepo="security-updates-rhel7" update mariadb-libs
Loaded plugins: amazon-id, rhui-lb
Resolving Dependencies
--> Running transaction check
---> Package mariadb-libs.x86_64 1:5.5.37-1.el7_0 will be updated
---> Package mariadb-libs.x86_64 1:5.5.40-2.el7_0 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
==============================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================
Updating:
mariadb-libs x86_64 1:5.5.40-2.el7_0 security-updates-rhel7 753 k
Transaction Summary
==============================================================================================================================
Upgrade 1 Package
Total download size: 753 k
Is this ok [y/d/N]:
It seems I lost the security metadata when I did the reposync.
Any ideas what the problem could be?
Etan has the right idea; those are the two ways we found to get around the issue with RHEL 6. You could pull the Red Hat metadata straight out of your yum cache and copy it into your local repo, and that works 90% of the time... but 10% of the time it will give you random failures if Red Hat happens to be updating a repository while you are syncing it.
Red Hat has a guide for how to graft security metadata into your local repo in RHEL 5/6, and I assume it works similarly in RHEL 7. If you have a Red Hat support account, see: https://access.redhat.com/solutions/55654
If you don't, below is my own take on it:
Run your reposync command with --download-metadata and other trimmings, but I would start with one repo at a time and put each one in its own directory, similar to how Red Hat does it, e.g. mkdir -p /path/to/repo-id && reposync -l -n --download-metadata -r repo-id -p /path/to/repo-id/.
Pull the abcdefghij-updateinfo.xml.gz metadata files from Red Hat, which contain the security metadata for each repository. Do this by running yum list-sec and then looking for the file in your local yum cache, under each repository's subdirectory, probably somewhere in /var/cache/yum/arch/7Server/repo-id.
Run createrepo on just that repository. createrepo -v /path/to/repo-id/
Go into /path/to/repo-id/ and then into the repodata subfolder. Copy the abcdefghij-updateinfo.xml.gz from your local yum cache into the repodata folder, but rename it to remove the hash at the beginning, so you are left with a file called updateinfo.xml.gz.
Use the modifyrepo command to insert the security metadata into that repo's table of contents (repomd.xml) file.
modifyrepo /path/to/repo-id/repodata/updateinfo.xml.gz /path/to/repo-id/repodata/
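To confirm the grafted metadata is visible from a client box, something like the following should now list security advisories and offer security updates (assuming the repo id from the question; both commands come from the standard yum security tooling):
yum --disablerepo="*" --enablerepo="security-updates-rhel7" updateinfo list security
yum --disablerepo="*" --enablerepo="security-updates-rhel7" --security check-update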

Installing solr and indexing mysql

Can anyone help me with installing Solr and configuring it against a MySQL table? I have tried almost all the tutorials; I tried with Jetty and also Tomcat, and I'm still getting errors like "Data Handler not defined" or "could not find solr". It's been a week and I've been trying all day.
In order to get Solr running (assuming that you've downloaded Solr and extracted it to a location), just navigate to the Jetty folder.
Under that there should be a start.jar.
Just type in java -jar start.jar - this should start Solr under jetty. As simple as that. For all my development purposes, I use this. I wouldn't worry about Tomcat unless the app is ready to be deployed to some server.
In order to get your Solr instance to pull data from MySQL, you need the DataImportHandler configured. This documentation describes it well.
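For a rough idea of what that configuration involves, here is a minimal sketch of a DIH data-config.xml; the database mydb, the items table, its columns, and the credentials are all made-up placeholders. The file goes into your core's conf directory, it is registered via a /dataimport requestHandler in solrconfig.xml, and the MySQL JDBC driver jar must be on Solr's classpath:
cat > data-config.xml <<'EOF'
<dataConfig>
  <!-- placeholder connection details: mydb / dbuser / dbpass -->
  <dataSource type="JdbcDataSource"
              driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/mydb"
              user="dbuser" password="dbpass"/>
  <document>
    <!-- index a hypothetical items table with id and name columns -->
    <entity name="item" query="SELECT id, name FROM items">
      <field column="id" name="id"/>
      <field column="name" name="name"/>
    </entity>
  </document>
</dataConfig>
EOF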
EDIT:
A Google search for "solr mysql import" led me here. It is exactly what you're after, I suppose.
I also had the same issue, and it is not easy to find a simple tutorial for this. Anyway, I found the following tutorial and it was useful for me.
http://lasithtechavenue.blogspot.com/2013/11/crawling-mysql-database-with-apache-solr.html
Thanks
Hi, please take a look here:
https://github.com/vikash32/indexing-mysql-table-into-solr
I have tried to make it less messy.
Step1:
Log in to Linux and go to the /opt folder, i.e. cd /opt/
Step2:
Download Solr 6.6.2 from the Solr site using the command below:
sudo wget http://www-us.apache.org/dist/lucene/solr/6.6.2/solr-6.6.2.tgz
Step3:
Extract the service installation script:
sudo tar xzf solr-6.6.2.tgz solr-6.6.2/bin/install_solr_service.sh --strip-components=2
Step4:
Install solr as a service using the script
sudo bash ./install_solr_service.sh solr-6.6.2.tgz
Step5: To check the Solr server status
sudo service solr status
Step6: To start Solr in cloud mode on RHEL
Go to the solr directory, i.e. cd /opt/solr
/opt/solr> sudo ./bin/solr start -c -force -s server/solr -p 8983 -z zk1:2181,zk2:2181,zk3:2181
zk1, zk2 and zk3 are the hostnames or IP addresses of your ZooKeeper nodes.
Step7: To create a collection on Solr
Go to the solr directory, i.e. cd /opt/solr
/opt/solr> sudo ./bin/solr create -c <collection_name> -p 8983 -s 2 -rf 2 (replace <collection_name> with the name you want)
-s stands for the number of shards
-rf stands for the replication factor (number of replicas)
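Once the DataImportHandler is configured (see the earlier answer about data-config.xml), a full import can be triggered and monitored over HTTP — a sketch, with <collection_name> again being a placeholder:
curl "http://localhost:8983/solr/<collection_name>/dataimport?command=full-import"
curl "http://localhost:8983/solr/<collection_name>/dataimport?command=status"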

how to set up the psql command in cygwin?

I have a local dev site on my machine with an Apache server and a PostgreSQL 9.1 database. As I'm using Windows, I also installed Cygwin. I want to access the database and run some queries via Cygwin instead of pgAdmin III, but it tells me that the psql command is not found. How should I set up the psql command in Cygwin?
As of today, you just have to install the postgresql-client package in Cygwin:
Run your Cygwin setup.exe file (this can be run multiple times to add more packages).
Type postgresql into the search box, select postgresql-client and press "next" to install.
Now you can open Cygwin terminal and type psql to run!
The best combo for Cygwin on Windows, I've found, is the normal Windows Postgres installation combined with Cygwin psql.
Cygwin psql (and other command-line tools) can be compiled from source fairly easily. Here are the steps for 9.2.4:
$ wget http://ftp.postgresql.org/pub/source/v9.2.4/postgresql-9.2.4.tar.bz2
$ tar xjf postgresql-9.2.4.tar.bz2
$ cd postgresql-9.2.4/
$ ./configure
$ cd src/bin/psql
$ make
This creates a psql.exe binary that works well with Cygwin. However, by default, it tries to connect to the local instance using a Unix socket instead of TCP. So use -h to specify the hostname and force TCP, for example:
$ ./psql -h localhost -U postgres
Move this psql.exe to someplace on your path (e.g. ~/bin) and possibly wrap it in a script that adds '-h localhost' for convenience when no other arguments are supplied.
The source could be modified to change the default, but that takes actual work ;)
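A minimal sketch of such a wrapper (assuming the compiled binary was copied to ~/bin/psql92.exe — an arbitrary name — and this script saved as ~/bin/psql):
#!/bin/sh
# pass through any explicit arguments; otherwise default to TCP on localhost
if [ $# -eq 0 ]; then
    exec ~/bin/psql92.exe -h localhost -U postgres
else
    exec ~/bin/psql92.exe "$@"
fi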
If I understand your question correctly you are running cygwin because you want to run queries against PostgreSQL via bash and psql on Windows, right?
Cygwin can run Windows binaries from bash, so install the native Windows builds and make sure psql.exe is in the PATH. You should be able to copy the executable if necessary.
There is no need to install a native Cygwin build of PostgreSQL. Just use the existing psql tool, and make sure you can access the Windows-native psql.exe.
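For example, if PostgreSQL 9.1 was installed with the stock Windows installer, adding its bin directory to the PATH in your Cygwin ~/.bashrc is usually enough (the path below is the installer's default and may differ on your machine):
export PATH="$PATH:/cygdrive/c/Program Files/PostgreSQL/9.1/bin"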