Reindexing Zebra because no results are found in OPAC and intranet search in the Koha Integrated Library System

I have a freshly installed Koha 3.16 on a Debian server. I have already imported the MARC records into the catalog, but when I search on the OPAC, no results are found.
I read this link: My Zebra Indexing won’t work! How do I fix it? (AKA: I search for stuff and nothing comes up! Help!)
I have tried to follow what it says, but unfortunately I'm stuck at the third step.
export PERL5LIB=/usr/share/koha/lib
export KOHA_CONF=/usr/share/koha/koha-conf.xml
/usr/share/koha/bin/migration_tools/rebuild_zebra.pl -b -r -v -x
I ran these commands, but still no luck. Then I tried to rebuild Zebra using the command:
sudo koha-rebuild-zebra -f -v mylibrary
The result shows:
Zebra configuration information
================================
Zebra biblio directory = /var/lib/koha/mylibrary/biblios
Zebra authorities directory = /var/lib/koha/mylibrary/authorities
Koha directory = /usr/share/koha/intranet/cgi-bin
Lockfile = /var/lock/koha/mylibrary/rebuild/rebuild..LCK
BIBLIONUMBER in : 999$c
BIBLIOITEMNUMBER in : 999$d
================================
skipping authorities
====================
exporting biblio
====================
Records exported: 7922
====================
REINDEXING zebra
====================
18:04:12-13/11 zebraidx(8862) [warn] zebra_lock_create fail fname=/var/lock/koha/mylibrary/biblios/norm..LCK [No such file or directory]
18:04:12-13/11 zebraidx(8862) [warn] zebra_lock_create fail fname=/var/lock/koha/mylibrary/biblios/shadow..LCK [No such file or directory]
18:04:12-13/11 zebraidx(8862) [fatal] Could not select database biblios errCode=109
18:04:12-13/11 zebraidx(8863) [warn] zebra_lock_create fail fname=/var/lock/koha/mylibrary/biblios/norm..LCK [No such file or directory]
18:04:12-13/11 zebraidx(8863) [warn] zebra_lock_create fail fname=/var/lock/koha/mylibrary/biblios/shadow..LCK [No such file or directory]
18:04:12-13/11 zebraidx(8863) [fatal] Could not select database biblios errCode=109
====================
CLEANING
====================
Zebra configuration information
================================
Zebra biblio directory = /var/lib/koha/mylibrary/biblios
Zebra authorities directory = /var/lib/koha/mylibrary/authorities
Koha directory = /usr/share/koha/intranet/cgi-bin
Lockfile = /var/lock/koha/mylibrary/rebuild/rebuild..LCK
BIBLIONUMBER in : 999$c
BIBLIOITEMNUMBER in : 999$d
================================
====================
exporting authority
====================
Records exported: 0
====================
REINDEXING zebra
====================
skipping biblios
====================
CLEANING
====================
What's wrong with reindexing Zebra, and how do I fix it? I found a report of the same problem, but honestly I don't understand how to apply a patch. Here's the link:
zebraidx errCode=109
Please help me fix it. I really have to get this working.

The same thing happened to me when installing Koha on CentOS. Try:
koha-rebuild-zebra -f -v -b mylibrary
You can use these lines to reset the search databases:
sudo zebraidx -c /etc/koha/sites/library/zebra-biblios.cfg drop biblios
sudo zebraidx -c /etc/koha/sites/library/zebra-biblios.cfg commit
koha-rebuild-zebra -b -r -v mylibrary
Finally: try changing the SearchEngine parameter in the system preferences from Zebra to Solr.
Other parameters (source):
-f, --full Does a reindex of the whole collection. Will run even if USE_INDEXER_DAEMON=yes.
-a, --authorities Only run the indexing process for authority records.
-b, --biblios Only run the indexing process for biblio records.
-q, --quiet Sometimes be a bit quieter for scripts/cronjobs.
-v, --verbose Be verbose. Useful for debugging indexing problems.
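The zebra_lock_create warnings in the output above suggest the lock directories themselves are missing, which would also explain the errCode=109 failure. A minimal sketch of recreating them, assuming the standard Debian package layout and an instance user named mylibrary-koha (the user name is an assumption; check the actual owner with ls -l /var/lib/koha/mylibrary):
# Recreate the missing Zebra lock directories for the "mylibrary" instance
sudo mkdir -p /var/lock/koha/mylibrary/biblios /var/lock/koha/mylibrary/authorities
# The instance user name below is an assumption
sudo chown -R mylibrary-koha:mylibrary-koha /var/lock/koha/mylibrary
sudo koha-rebuild-zebra -f -v mylibrary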

Hi, all your previous steps look fine. After following the steps above up to /usr/share/koha/bin/migration_tools/rebuild_zebra.pl -b -r -v -x, run the following command:
zebrasrv -f /path/to/koha-conf.xml
i.e. point it at the KOHA_CONF xml file. It should run as a background process; otherwise, if you log out, search goes down as well.
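For example, a minimal sketch of keeping zebrasrv alive across logout with nohup, assuming the KOHA_CONF path exported earlier in the question and a writable log location (the log path is an assumption):
export KOHA_CONF=/usr/share/koha/koha-conf.xml
# nohup detaches the server from the terminal, so logging out does not kill it
nohup zebrasrv -f "$KOHA_CONF" >> /var/log/koha/zebrasrv.log 2>&1 &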

I have a similar problem: zebrasrv seems to exit randomly. The log is not much help, as it does not capture what caused the crash. The symptom is that searches don't work. I found that restarting Koha fixes the issue, but restarting all the time is not ideal. Some investigation on my system (Debian 8.3 with Koha 16.05.05.000) reveals that the zebrasrv process dying is a symptom of the issue. I wrote this script to be run as a cron job (for root) every 60 seconds. This appears to make the system recover. The crash seems to happen once every few days.
This has been happening in the last few releases of koha-common installed via Debian apt-get.
I called it /root/check_zebra.sh; it is:
#!/bin/bash
# Add this to cron, i.e. (remove the leading "#") for root:
# * * * * * /root/check_zebra.sh >> /root/check_zebra.log
#
# Is zebrasrv running with the Koha configuration?
ps ax | grep zebrasrv | grep koha-conf.xml > /dev/null
status=$?
if [ "$status" = "0" ]
then
    :   # zebrasrv is running; nothing to do
else
    # Restart
    echo "============================================="
    date
    echo "zebrasrv has stopped. Restarting Koha..."
    echo "/etc/init.d/koha-common restart"
    /etc/init.d/koha-common restart
fi
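For reference, the ps/grep pipeline and the restart can be collapsed into one line with pgrep, assuming the procps tools are installed (a sketch; it drops the logging done above):
pgrep -f 'zebrasrv.*koha-conf.xml' > /dev/null || /etc/init.d/koha-common restart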

Related

DSpace 7.1 AIP restore StackOverflowError

I am trying to migrate from DSpace 6.4 to 7.1. The new DSpace is installed on another machine (a virtual machine on CentOS 7 with 8 GB of RAM).
I have created a full-site AIP backup with user passwords (total size of packages: 11 GB).
I've tried to do a full restore but always get the same error.
So I'm just trying to import only the "first level without children":
JAVA_OPTS="-Xmx2048m -Xss4m -Dfile.encoding=UTF-8" /dspace/bin/dspace packager -r -k -t AIP -e dinkwi.test@gmail.com -o skipIfParentMissing=true -i 123456789/0 /home/dimich/11111/repo.zip
It doesn't matter whether I use the -k or -f param; the output is always the same:
Ingesting package located at /home/dimich/11111/repo.zip
Exception: null
java.lang.StackOverflowError
at org.dspace.eperson.GroupServiceImpl.getChildren(GroupServiceImpl.java:788)
at org.dspace.eperson.GroupServiceImpl.getChildren(GroupServiceImpl.java:802)
.... (more than 1k lines)
at org.dspace.eperson.GroupServiceImpl.getChildren(GroupServiceImpl.java:802)
My dspace.log ends with:
2021-12-20 11:05:28,141 INFO unknown unknown org.dspace.eperson.GroupServiceImpl @ dinkwi.test@gmail.com::update_group:group_id=9e6a2038-01d9-41ad-96b9-c6fb55b44381
2021-12-20 11:05:30,048 INFO unknown unknown org.dspace.eperson.GroupServiceImpl @ dinkwi.test@gmail.com::update_group:group_id=23aaa7e9-ca2d-4af5-af64-600f7126e2be
2021-12-20 11:05:30,800 INFO unknown unknown org.springframework.cache.ehcache.EhCacheManagerFactoryBean @ Shutting down EhCache CacheManager 'org.dspace.services'
So I just want to figure out the problem: is the stack too small, is there a bug in a user/group that leads to infinite recursion, or is it something else?
The main problem: I'm good with PHP/MySQL but have no experience with Java/PostgreSQL or with how to debug this...
Any help would be appreciated.
P.S. After a failed restore I always run the command:
/dspace/bin/dspace cleanup -v
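One cheap way to probe the "small stack" guess above, assuming the group hierarchy is deep but finite (an assumption; a genuine cycle will overflow at any size), is to raise the JVM thread stack size well beyond the 4m already tried:
# Same packager invocation as above, only with a larger -Xss (16m is an arbitrary test value)
JAVA_OPTS="-Xmx2048m -Xss16m -Dfile.encoding=UTF-8" /dspace/bin/dspace packager -r -k -t AIP -e dinkwi.test@gmail.com -o skipIfParentMissing=true -i 123456789/0 /home/dimich/11111/repo.zip
If the restore gets noticeably further, the recursion is merely deep; if it overflows at the same group, a cycle in the group graph is more likely.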

Solr throwing 'org.apache.commons.exec.ExecuteException' on Start

I am setting up Solr on CentOS 7. When I try to run any sample project, e.g. using ./bin/solr start -e techproducts, it throws the error 'Exception: org.apache.commons.exec.ExecuteException: Process exited with an error: 1 (Exit value: 1)'. How do I fix this? Java is installed under /usr/bin/java.
I tried changing the Java home directory path.
./bin/solr start -e techproducts
*** [WARN] *** Your Max Processes Limit is currently 31165.
It should be set to 65000 to avoid operational disruption.
If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
Solr home directory /tmp/solr-7.7.2/example/techproducts/solr already exists.
Starting up Solr on port 8983 using command:
"bin/solr" start -p 8983 -s "example/techproducts/solr"
*** [WARN] *** Your Max Processes Limit is currently 31165.
It should be set to 65000 to avoid operational disruption.
If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
WARNING: Starting Solr as the root user is a security risk and not considered best practice. Exiting.
Please consult the Reference Guide. To override this check, start with argument '-force'
ERROR: Failed to start Solr using command: "bin/solr" start -p 8983 -s "example/techproducts/solr" Exception : org.apache.commons.exec.ExecuteException: Process exited with an error: 1 (Exit value: 1)
Credit for the answer should go to @MatsLindh (see comment above). I'm adding it here for others who may experience the same problem.
The issue is that Solr doesn't want you to run it as a root user - or as what it thinks is a root user (I've seen this issue with WSL as well, where one isn't actually running as root).
Using root is a security risk, but you can force Solr to start anyway by adding -force to the command, for example:
bin/solr start -e cloud -force
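Alternatively, and more in line with what the check is protecting against, you can run Solr as a dedicated unprivileged user. A sketch, assuming the /tmp/solr-7.7.2 install path from the log above and a user named solr (the user name is an assumption):
sudo useradd -m solr                      # create an unprivileged user
sudo chown -R solr:solr /tmp/solr-7.7.2   # give it ownership of the Solr tree
sudo -u solr /tmp/solr-7.7.2/bin/solr start -e techproducts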

Using "Remote SSH" in VSCode on a target machine that only allows inbound SSH connections

Is there a way to use the VSCode Remote SSH extension to interact with a remote host that does not allow outbound internet connections?
Is it possible to download the vscode-server files from another system and copy to host?
I read this, but I can't connect the server to the internet.
When you connect to a host, it executes a bash script that wgets or curls a tarball and extracts it into a directory in your home directory. Here's an offline workaround.
Attempt to connect, let it fail
On server, get the commit id
$ ls ~/.vscode-server/bin
553cfb2c2205db5f15f3ee8395bbd5cf066d357d
Download the tarball, replacing $COMMIT_ID with the commit number from the previous step
For Stable Version
https://update.code.visualstudio.com/commit:$COMMIT_ID/server-linux-x64/stable
For Insider Version
https://update.code.visualstudio.com/commit:$COMMIT_ID/server-linux-x64/insider
Move tarball to ~/.vscode-server/bin/$COMMIT_ID/vscode-server-linux-x64.tar.gz
Extract tarball in this directory
$ cd ~/.vscode-server/bin/$COMMIT_ID
$ tar -xvzf vscode-server-linux-x64.tar.gz --strip-components 1
Connect again
You'll still need to install any extensions manually. There's a download button next to all the extensions in the marketplace. Once you have the .vsix file you can install them through the GUI with the Install from VSIX option in the extensions manager.
This is kind of a pain and hopefully they improve this process, but if you have a network-based home directory, you only have to do this once.
open vscode -> about
Version: 1.46.1
Commit: cd9ea6488829f560dc949a8b2fb789f3cdc05f5d
Date: 2020-06-17T21:17:14.222Z
Electron: 7.3.1
Chrome: 78.0.3904.130
Node.js: 12.8.1
V8: 7.8.279.23-electron.0
OS: Darwin x64 17.7.0
$COMMIT_ID = cd9ea6488829f560dc949a8b2fb789f3cdc05f5d
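Putting the server-side steps together, a minimal sketch, assuming the stable tarball has already been downloaded locally and copied to the server's home directory (paths as in the steps above):
# On the server, after copying vscode-server-linux-x64.tar.gz over
COMMIT_ID=cd9ea6488829f560dc949a8b2fb789f3cdc05f5d   # from the About dialog, as shown above
mkdir -p ~/.vscode-server/bin/$COMMIT_ID
mv ~/vscode-server-linux-x64.tar.gz ~/.vscode-server/bin/$COMMIT_ID/
cd ~/.vscode-server/bin/$COMMIT_ID
tar -xvzf vscode-server-linux-x64.tar.gz --strip-components 1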
A new feature is being added to support offline install.
You can now solve this issue with a new user setting in the Remote - SSH extension. If you enable the setting remote.SSH.allowLocalServerDownload, the extension will install the VS Code Server on the client first and then copy it over to the server via SCP.
Note: this is currently an experimental feature, but it will be turned on by default in the next release.
https://code.visualstudio.com/blogs/2019/10/03/remote-ssh-tips-and-tricks
As a workaround I have done the following:
Desktop ~/.ssh/config
...
Host *
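# Single-argument RemoteForward sets up a reverse dynamic (SOCKS) forward
# on remote port 54321 (needs an OpenSSH 7.6+ client)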
RemoteForward 54321
...
Remote: ~/bin/wget (where ~/bin is added to PATH via .bashrc):
#!/bin/bash
# Route wget through the reverse SOCKS tunnel using tsocks
export LD_LIBRARY_PATH=$HOME/opt/lib/tsocks/
export TSOCKS_CONF_FILE=$HOME/opt/tsocks/tsocks.conf
$HOME/bin/tsocks /usr/bin/wget "$@"
Remote: ~/opt/tsocks/tsocks.conf
server = 127.0.0.1
server_port = 54321
server_type = 5
Note: the tsocks binary has been scp-ed to ~/bin/tsocks, and ~/opt/tsocks/ has been created with libtsocks.so, which is normally stored in /usr/lib64/libtsocks.so.
This is a workaround that gives me wget functionality without messing with anything outside my profile (e.g. no root required... even though I have it).
Current Version of VS Code: 1.48.2
I just kill the wget process on the server end and let the client download the archive and transfer it to the server. That's quite easy, as shown below.
Make sure that you set this in settings.json:
"remote.SSH.allowLocalServerDownload": true,
Then execute the shell commands below.
# to find the <pid>
ps aux | grep wget | grep vscode-server
# kill the process
kill -9 <pid>
# then wait for the client downloading and transferring
# optional: If you want to know the progress, just
cd ~/.vscode-server/bin/<commit-id>/
watch -n 1 -d ls -rthl
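For what it's worth, the find-and-kill steps can be collapsed into a single pkill, assuming the procps tools are available (a sketch):
pkill -9 -f 'wget.*vscode-server'   # same effect as the ps/grep/kill sequence above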

I'm trying to integrate LDAP with DevStack, and when I ran ./stack.sh I got this: localrc: line 9: KEYSTONE_IDENTITY_BACKEND: command not found

localrc file:
ADMIN_PASSWORD=password2
MYSQL_PASSWORD=password2
RABBIT_PASSWORD=password2
SERVICE_PASSWORD=password2
SERVICE_TOKEN=token2
ENABLED_SERVICES=key,n-api,n-crt,n-obj,n-cpu,n-net,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,mysql,rabbit,ldap
KEYSTONE_IDENTITY_BACKEND=ldap
KEYSTONE_CLEAR_LDAP=yes
LDAP_PASSWORD=9632
I followed this website: http://www.ibm.com/developerworks/cloud/library/cl-ldap-keystone/
I assume the snippet above comes from a file written in shell script. Your example looks OK.
I checked the link you provided and noted that the line you say failed is written in the IBM example as:
KEYSTONE_IDENTITY_BACKEND = ldap
This is not legal sh (or bash): with spaces around the "=", the shell parses KEYSTONE_IDENTITY_BACKEND as a command name with "=" and "ldap" as its arguments, which produces exactly the error message you described:
KEYSTONE_IDENTITY_BACKEND = ldap
-bash: KEYSTONE_IDENTITY_BACKEND: command not found
I suspect you copied and pasted the bad example from the link into your localrc file, which caused the error you saw, but somehow when you wrote the SO question, you corrected the mistake by removing the spaces around the "=".
Edit: Investigation
TL;DR
Create a file in the root of the devstack repo, devstack/local.conf with the contents:
[[local|localrc]]
ADMIN_PASSWORD=password2
MYSQL_PASSWORD=password2
RABBIT_PASSWORD=password2
SERVICE_PASSWORD=password2
SERVICE_TOKEN=token2
ENABLED_SERVICES=key,n-api,n-crt,n-obj,n-cpu,n-net,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,mysql,rabbit,ldap
KEYSTONE_IDENTITY_BACKEND=ldap
KEYSTONE_CLEAR_LDAP=yes
LDAP_PASSWORD=9632
Full Description
I installed devstack on CentOS 7 (using the DevStack Quick Start Guide):
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack
./stack.sh
I entered passwords as prompted, but eventually it failed with the error:
Error: pg_config executable not found.
Please add the directory containing pg_config to the PATH
or specify the full executable path with the option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
I traced the problem to a limited PATH in the sudoers entry; because my PostgreSQL install is in a non-standard location, I linked pg_config into /usr/local/bin and ran stack.sh again:
sudo ln -s /usr/pgsql-9.3/bin/pg_config /usr/local/bin/pg_config
./stack.sh
(You probably won't have to do this if Postgres is in a standard location).
The install took a long time:
This is your host IP address: 192.168.200.181
This is your host IPv6 address: ::1
Horizon is now available at http://192.168.200.181/dashboard
Keystone is serving at http://192.168.200.181/identity/
The default users are: admin and demo
The password: 12345678
2016-07-17 18:16:32.834 | WARNING:
2016-07-17 18:16:32.834 | Using lib/neutron-legacy is deprecated, and it will be removed in the future
2016-07-17 18:16:32.834 | stack.sh completed in 1447 seconds.
I killed the devstack session and did it all again with a clean git repo and a local.conf file.
./unstack.sh
cd ..
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack
cat << __EOF > local.conf
[[local|localrc]]
ADMIN_PASSWORD=password2
MYSQL_PASSWORD=password2
RABBIT_PASSWORD=password2
SERVICE_PASSWORD=password2
SERVICE_TOKEN=token2
ENABLED_SERVICES=key,n-api,n-crt,n-obj,n-cpu,n-net,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,mysql,rabbit,ldap
KEYSTONE_IDENTITY_BACKEND=ldap
KEYSTONE_CLEAR_LDAP=yes
LDAP_PASSWORD=9632
__EOF
./stack.sh
This time there were no password prompts, so the local config was definitely read.
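To double-check that the backend setting was actually picked up, one can inspect the generated Keystone configuration; a sketch, assuming the default /etc/keystone/keystone.conf location (an assumption for this DevStack version):
grep -n -A 2 '^\[identity\]' /etc/keystone/keystone.conf   # expect the driver line to reference ldap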

Why doesn't setting the SUID bit in OpenBSD set effective and saved UIDs to executable file owner?

I am using a fresh install of OpenBSD 5.3 as a guest OS on Parallels for Mac:
$ uname -a
OpenBSD openbsd.localdomain 5.3 GENERIC#53 amd64
To my surprise, a binary file owned by root with its SUID bit set runs with UIDs as if the SUID was not set. That is, when UID 1000 runs such a program, the program starts in state:
<real_uid, effective_uid, saved_uid> = <1000, 1000, 1000>
and not in state:
<real_uid, effective_uid, saved_uid> = <1000, 0, 0>
as expected.
Why is this the case?
Here are the details regarding how I found the issue:
I have written an interactive C program (compiled as setuid_min.bin) for evaluating setuid behaviour on different Unix systems. The program lives in a subdirectory of UID 1000's home directory, and the sudo command is used to change the ownership and the SUID bit; then the program is run, and I enter the command uid to report the real, effective, and saved UIDs of the process:
$ sudo chown root:staff setuid_min.bin
$ ls -l | grep 'setuid_min\.bin$'
-rwxr-xr-x 1 root staff [...] setuid_min.bin
$ sudo chmod a+s setuid_min.bin
$ ls -l | grep 'setuid_min\.bin$'
-rwsr-sr-x 1 root staff [...] setuid_min.bin
$ ./setuid_min.bin
uid
1000 1000 1000 some_pid
exit
$
Note that some_pid above is the pid of the setuid_min.bin process. The program reports the real, effective, and saved UIDs by printing the output of the following shell command:
ps -ao ruid,uid,svuid,pid | grep '[ ]my_pid$'
where my_pid is the pid reported by getpid(). My only guess as to why this might be the case is that OpenBSD has some underlying permissions structure that uses the ownership/permissions of the directory where setuid_min.bin resides, or that it is not actually changing the ownership/SUID bit when an unprivileged user uses sudo to change file permissions.
Most likely your binary is in one of the default partitions that are mounted "nosuid". The default fstab the install script creates will mount everything nosuid unless it's known to contain suid binaries.
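A quick way to verify this hypothesis, as a sketch (device and mount point names will differ per install):
df .                  # run from the directory containing setuid_min.bin; shows its filesystem
mount | grep nosuid   # list the mounts carrying the nosuid flag
cat /etc/fstab        # the install-time defaults live here
# If the home partition is nosuid, repeat the test from a partition that permits
# suid binaries (on a default install /home is nosuid while /usr typically is not).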