Unable to get data from PostgreSQL using mod_lua - apache

This is my setup:
OS: Linux Ubuntu 14.04 64bit
DB: Postgres 9.4 (installed from official Postgres repo)
Apache: 2.4.7, with mod_lua manually compiled from the Apache 2.4.20 source and installed
Database init script is as follows:
CREATE TABLE users (
    id     SERIAL PRIMARY KEY,
    client VARCHAR(20) NOT NULL UNIQUE,
    secret VARCHAR(20) NOT NULL
);
INSERT INTO users(client, secret) VALUES('john' , 'john' );
INSERT INTO users(client, secret) VALUES('apple', 'orange');
INSERT INTO users(client, secret) VALUES('kiwi' , 'pear' );
INSERT INTO users(client, secret) VALUES('peach', 'berry' );
Apache has mod_dbd enabled, configured as follows:
<IfModule dbd_module>
DBDriver pgsql
DBDPersist on
DBDMax 20
DBDParams "host='localhost' port='5432' dbname='demousers' user='postgres' password='postgres'"
DBDPrepareSQL 'SELECT u.secret FROM users u WHERE u.client=%s' client_secret
</IfModule>
There is also mod_lua which is configured like this:
<IfModule lua_module>
LuaRoot /vagrant/luatest
LuaScope thread
LuaCodeCache stat
LuaHookAccessChecker /vagrant/luatest/cookie_handler.lua handler early
LuaHookAccessChecker /vagrant/luatest/handler.lua handler late
</IfModule>
This is the sample code I'm trying to execute in handler.lua, which fails:
require "string"
require "apache2"
local inspect = require "inspect"
function handler(r)
local db, err = r:dbacquire()
if not db then
r:debug("[500] DB Error: " .. err)
return 500
end
r:debug("Acquired database")
local statement, errmsg = db:prepared(r, "client_secret")
if not statement then
r:debug("[500] DB Error: " .. errmsg)
db:close()
return 500
end
r:info("Acquired prepared statement")
local secret
local result, emsg = statement:select("john")
if not emsg then
r:info("Fetch rows")
local rows = result(0, true)
r:debug("Rows " .. inspect(rows))
for k, row in pairs(rows) do
r:info("Pass " .. k .. inspect(row))
if row[1] then
secret = string.format("%s", row[1])
end
end
else
r:debug( "Error : " .. emsg)
end
db:close()
return 403
end
Looking at the Postgres SQL log I can see that the query was executed correctly and the parameter was passed. The issue is that the returned record has no values, just nil placeholders in the Lua table; rows looks like this:
{ {} }
So, is this a bug or my mistake?

Unfortunately it is a bug. For details see:
https://bz.apache.org/bugzilla/show_bug.cgi?id=56379

This is the solution that fixes the problem:
# Install Apache 2.4.10 from backports repo that has working mod_lua
sudo apt-get install -y -t trusty-backports apache2 apache2-dev apache2-utils
sudo apt-get install -y libaprutil1-dbd-pgsql
# Install lua dependencies
sudo apt-get build-dep -y lua5.1
sudo apt-get install -y lua5.1
sudo apt-get install -y liblua5.1-0-dev
# Get the source code
APACHEVER='2.4.10'
cd /tmp/
apt-get source -y apache2="$APACHEVER"
mv "apache2-$APACHEVER/modules/lua/lua_dbd.c" "apache2-$APACHEVER/modules/lua/lua_dbd.c_original"
# Apply the patch for https://bz.apache.org/bugzilla/show_bug.cgi?id=56379
# Without it dbd + lua + postgres does not work - the patched C file needs to be prepared beforehand
cp -u /vagrant/lua_dbd.c "apache2-$APACHEVER/modules/lua/"
cd "apache2-$APACHEVER/modules/lua"
# Build and install patched lua module
sudo apxs -I/usr/include/lua5.1 -cia mod_lua.c lua_*.c -lm -llua5.1
sudo service apache2 restart
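After restarting, it may be worth confirming that the rebuilt module is the one Apache actually loaded; a quick check, assuming the stock Ubuntu module path:
apache2ctl -M | grep lua_module            # should list lua_module (shared)
ls -l /usr/lib/apache2/modules/mod_lua.so  # timestamp should match the apxs build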

Related

Error when running API call in R using the comtradr package

I get this error when I try to run an API call using ct_search() from the comtradr package in R.
Error in curl::curl_fetch_memory(url, handle = handle) :
SSL certificate problem: certificate has expired
Any ideas?
You haven't given enough details, but it could be related to this:
https://support.sectigo.com/articles/Knowledge/Sectigo-AddTrust-External-CA-Root-Expiring-May-30-2020
If you are running curl from a Linux machine, you can do the following:
$ sudo vi /etc/ca-certificates.conf
add an exclamation point in front of the line that says "mozilla/AddTrust_External_Root.crt" and save the file
$ sudo apt update
$ sudo apt install ca-certificates
$ sudo update-ca-certificates -f -v
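If you would rather script that edit than open the file in vi, a sed one-liner along these lines should be equivalent (the -i.bak keeps a backup of the original file):
$ sudo sed -i.bak 's|^mozilla/AddTrust_External_Root.crt|!mozilla/AddTrust_External_Root.crt|' /etc/ca-certificates.conf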

Connect Raspberry Pi to SQL Server using pyodbc error: [08001] [FreeTDS][SQL Server]Unable to connect to data source (0) (SQLDriverConnect)

I'm trying to connect a raspberry pi 3 to a local SQL Server.
I did this:
sudo apt-get install unixodbc
sudo apt-get install unixodbc-dev
sudo apt-get install freetds-dev
sudo apt-get install tdsodbc
sudo apt-get install freetds-bin
sudo pip3 install pyodbc
sudo apt-get install python-pyodbc
sudo nano /etc/freetds/freetds.conf
Added this block
[sqlserver]
host = 192.168.0.109 # Sql Server's IP addr
port = 1433 # default
tds version = 7.0 #
instance = Database1 # Database name
Now in /etc/odbcinst.ini:
[FreeTDS]
Description = FreeTDS unixODBC Driver
Driver = /usr/lib/arm-linux-gnueabihf/odbc/libtdsodbc.so
Setup = /usr/lib/arm-linux-gnueabihf/odbc/libtdsodbc.so
UsageCount = 1
and in the /etc/odbc.ini file as follows:
[NAME1]
Driver = /usr/lib/arm-linux-gnueabihf/odbc/libtdsodbc.so
Description = MSSQL Server
Trace = No
Server = ServerName1
Database = Database 1
Port = 1433
TDS_Version = 7.4
when I run
tsql -S sqlserver -U username
I can connect to the database and run queries, but when I try
tsql isql NAME1 user 'password'
I get
[ISQL]ERROR: Could not SQLConnect
I've got a Python script with:
import pyodbc

class SQL:
    cnxn = None
    cursor = None

    def __init__(self):
        try:
            self.cnxn = pyodbc.connect('DRIVER={FreeTDS}; SERVER= ws2016_01; DATABASE=databasename; UID=user; PWD=password;TDS_Version=7.2;')
            self.cnxn.setdecoding(pyodbc.SQL_CHAR, encoding='utf-8')
            self.cnxn.setdecoding(pyodbc.SQL_WCHAR, encoding='utf-8')
            self.cnxn.setencoding(encoding='utf-8')
And I keep getting the error
[08001] [FreeTDS][SQL Server]Unable to connect to data source (0) (SQLDriverConnect)
Thanks for reading, any help would be deeply appreciated!

I would like to set up rfc5766-turn-server on Ubuntu 14.04; can anyone give me the full set of steps? I am doing it on AWS EC2

I have tried to install and set up rfc5766-turn-server on AWS EC2 but have been unable to, as I cannot find a proper workflow or the command lines for it. Can someone help me with this? I need to set it up on Ubuntu 14.04.
Do an SSH login to your EC2 instance, then run the commands below to install and start the TURN server.
commands for installing turnserver:
sudo apt-get update
sudo apt-get install make gcc libssl-dev libevent-dev wget -y # for installing modules required by turn server
mkdir ~/turn && cd ~/turn # creating temp directory
wget turnserver.open-sys.org/downloads/v3.2.5.9/turnserver-3.2.5.9.tar.gz # downloading the TURN source code
tar -zxvf *.gz # extract
cd turn*
make
sudo make install # installing the rfc5766
cd ../.. && rm -rf turn # cleaning up
command for starting the TURN server:
turnserver -a -o -v -n -u user:root -p 3478 -L INT_IP -r someRealm -X EXT_IP/INT_IP
assumptions:
your public IP and internal IP = EXT_IP and INT_IP
desired port for listening: 3478
single credential username:password = user:root
realm: someRealm
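for example, with placeholder addresses EXT_IP=203.0.113.10 and INT_IP=172.31.5.20 (example values only, substitute your own), the command becomes:
turnserver -a -o -v -n -u user:root -p 3478 -L 172.31.5.20 -r someRealm -X 203.0.113.10/172.31.5.20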
in your WebRTC app, you can use the TURN server like:
{
    url: 'turn:user@EXT_IP:3478',
    credential: 'root'
}

"no such file or directory" when running Docker image

I'm new to Docker and trying to create an image with owncloud 7 on centos 6.
I've created a Dockerfile. I've built an image.
Everything goes fine, except that when I run the image:
docker run -i -t -d -p 80:80 vfoury/owncloud7:v3
I get the error:
Cannot start container a7efd9be6a225c19089a0f5a5c92f53c4dd1887e8cf26277d3289936e0133c69:
exec: "/etc/init.d/mysqld start && /etc/init.d/httpd start":
stat /etc/init.d/mysqld start && /etc/init.d/httpd start: no such file or directory
If I run the image with /bin/bash
docker run -i -t -p 80:80 vfoury/owncloud7:v3 /bin/bash
then I can run the command
/etc/init.d/mysqld start && /etc/init.d/httpd start
and it works.
Here is my Dockerfile content:
# use the centos6 base image
FROM centos:centos6
MAINTAINER Vincent Foury
RUN yum -y update
# Install SSH server
RUN yum install -y openssh-server
RUN mkdir -p /var/run/sshd
# add epel repository
RUN yum install epel-release -y
# install owncloud 7
RUN yum install owncloud -y
# Expose ports 22 and 80 to make them accessible from the host
EXPOSE 22 80
# Modify owncloud conf to allow any client to access
COPY owncloud.conf /etc/httpd/conf.d/owncloud.conf
# start httpd and mysql
CMD ["/etc/init.d/mysqld start && /etc/init.d/httpd start"]
Any help would be greatly appreciated
Vincent F.
After many tests, here is the Dockerfile that works to install owncloud (without MySQL):
# use the centos6 base image
FROM centos:centos6
RUN yum -y update
# add epel repository
RUN yum install epel-release -y
# install owncloud 7
RUN yum install owncloud -y
EXPOSE 80
# Modify owncloud conf to allow any client to access
COPY owncloud.conf /etc/httpd/conf.d/owncloud.conf
# start httpd
CMD ["/usr/sbin/apachectl","-D","FOREGROUND"]
then
docker build -t <myname>/owncloud .
then
docker run -i -t -p 80:80 -d <myname>/owncloud
then you should be able to open
http://localhost/owncloud
in your browser
I think this is because you're trying to use && within the Dockerfile CMD instruction.
If you intend to run multiple services within a Docker container, you may want to check Supervisor. It enables you to run multiple daemons within the container. Check the Docker documentation at https://docs.docker.com/articles/using_supervisord/.
Alternatively you could ADD a simple bash script to start the two daemons and then set the CMD to use the bash file you added.
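A minimal sketch of that bash-script approach, with hypothetical file names (the apachectl foreground invocation mirrors the working Dockerfile above):
#!/bin/bash
# start.sh - start MySQL, then keep Apache in the foreground so the container stays alive
/etc/init.d/mysqld start
exec /usr/sbin/apachectl -D FOREGROUND
and in the Dockerfile:
ADD start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]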
The issue is that your CMD argument contains shell operations, but you're using the exec-form of CMD instead of the shell-form. The exec-form passes the arguments to one of the exec functions, which will not interpret the shell operations. The shell-form passes the arguments to sh -c.
Replace
CMD ["/etc/init.d/mysqld start && /etc/init.d/httpd start"]
with
CMD /etc/init.d/mysqld start && /etc/init.d/httpd start
or
CMD ["sh", "-c", "/etc/init.d/mysqld start && /etc/init.d/httpd start"]
See https://docs.docker.com/reference/builder/#cmd.

How to configure puppet so that it installs yum packages with debug output?

When I run puppet apply, it tries to install packages using the following command:
/usr/bin/yum -d 0 -e 0 -y install couchdb-1.2.0-7.el6
How can I configure it so that it runs the following instead:
/usr/bin/yum -y install couchdb-1.2.0-7.el6
That is, without the debug output being suppressed?
You could create a module with an exec resource in it.
exec { "couchdb":
  command => "/usr/bin/yum -y -d 0 install couchdb-1.2.0-7.el6",
  path    => "/usr/local/bin/:/bin/",
}
As a test I did an update to my wget. Before running the module, wget was at 1.11.4-2.el5; in my repository I had 1.11.4-3.el5_8.1.
Here are the results of my 'yum update list wget.x86_64':
Installed Packages
wget.x86_64 1.11.4-2.el5 installed
Available Packages
wget.x86_64 1.11.4-3.el5_8.1 update
This is my puppet output after applying the class (with a debug option to show you the output):
debug: Executing '/usr/bin/yum -y -d 0 update wget.x86_64'
notice: /Stage[main]/Yum-update-test/Exec[wget]/returns: executed successfully
And this is the output of the 'yum update list wget.x86_64' after the class/module was applied:
Installed Packages
wget.x86_64 1.11.4-3.el5_8.1 installed
While waiting for a real fix through this ticket:
https://tickets.puppetlabs.com/browse/PUP-3453
your only option is to modify the yum package provider directly:
/usr/lib/ruby/site_ruby/1.8/puppet/provider/package/yum.rb
def install
  wanted = @resource[:name]
  # If not allowing virtual packages, do a query to ensure a real package exists
  unless @resource.allow_virtual?
    yum *['-d', '0', '-e', '0', '-y', install_options, :list, wanted].compact
  end
Change the '-d' value to 10 and you'll be done
If you provide yum the -d or -e options multiple times, it will use the most recent values. So, you can also use install_options on your package resources. For example:
package { 'wget':
  install_options => ['-d', '10', '-e', '1', '-v'],
}
your puppet log will then include something like:
2017-10-19 14:02:48 +0000 Puppet (debug): Executing: '/usr/bin/yum -d 0 -e 0 -y -d 10 -e 1 -v install wget'
... and all of the debug output.
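If you want the extra yum verbosity for every package rather than a single resource, a Puppet resource default is one option (a sketch, not part of the original answer):
# e.g. in site.pp; applies to all Package resources in scope
Package {
  install_options => ['-d', '10', '-e', '1', '-v'],
}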