Option to add new Z SQL Method is not shown in Zope 2.13.6

I installed the ZSQL package explicitly for Zope 2.13.6, expecting it to make connecting to an RDBMS easier, but the option for adding a Z SQL Method is not shown in the ZMI (management interface). I used buildout to install the ZSQL package. Since this package came by default in prior releases, I have not faced this problem before. I also could not find any specific information about installing the ZSQL package in Zope.
Kindly give the solution.
Regards,
CPK

"Works for me"::
$ /opt/Python-2.7.0/bin/virtualenv --no-site-packages /tmp/zsql
New python executable in /tmp/zsql/bin/python
Installing setuptools............................done.
$ /tmp/zsql/bin/easy_install Zope2==2.13.6 Products.ZSQLMethods==2.13.4
...
Finished processing dependencies for Products.ZSQLMethods==2.13.4
$ /tmp/zsql/bin/mkzopeinstance -u admin:123 -d /tmp/zsqlinst
$ /tmp/zsqlinst/bin/zopectl fg
...
2011-07-05 11:35:53 INFO Zope Ready to handle requests
The "Z SQL Method" option is in the ZMI add list. Selecting it shows a page which says,
"There are no SQL database connections. You need to add a Zope SQL database connection before
you can create a Zope SQL Method." After adding a package for my SQL backend, e.g.:
$ /tmp/zsql/bin/easy_install Products.ZMySQLDA
...
Finished processing dependencies for Products.ZMySQLDA
and restarting Zope, I added a "Z MySQL Database Connection", and was then able to add
and successfully test a "Z SQL Method" against the connection.

Add the Products.ZSQLMethods egg to your buildout (a sketch follows below), but a warning: I think you will still get errors. (I'm also trying to solve this; nothing new to report here.)
Cleber J Santos
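For reference, a minimal sketch of the buildout change described above, assuming a standard plone.recipe.zope2instance section named [instance] (section and option names are illustrative; match them to your own buildout.cfg), followed by re-running buildout:
[instance]
eggs +=
    Products.ZSQLMethods
$ bin/buildout -N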

You need to have a Zope database adapter installed, and possibly a Python DB-API module, before that option is available.
My preference is to use SQLAlchemyDA for the database adapter together with an appropriate DB-API module (I use cx_Oracle for Oracle and psycopg2 for PostgreSQL, but SQLAlchemyDA supports most relational databases).
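As a hedged illustration of that setup, installing the adapter and a DB-API module into the same virtualenv used in the walkthrough above might look like this (Products.SQLAlchemyDA is the PyPI name; swap psycopg2 for your backend's driver). Afterwards, restart Zope and add the adapter's connection object from the ZMI add list before creating the Z SQL Method:
$ /tmp/zsql/bin/easy_install Products.SQLAlchemyDA psycopg2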


How to run Odoo with OCA repositories' modules in Odoo.sh?

I am testing Odoo.sh, trying to run an Odoo 15 Enterprise instance. I read all the documentation and watched several webinars about it, but I am not able to run an instance with any OCA module.
To do that, I followed these steps:
In the Odoo.sh interface, I created a new branch in the Development category, forking from the main branch (the one in the Production category). Note: the main branch is the one created by default by Odoo.sh; I didn't make any modifications to it, and in fact it works fine, I can connect to it.
Also in the Odoo.sh interface, I clicked on the Submodule button and then on Run on Odoo.sh. In the pop-up that opened, I added the OCA repository l10n-spain (version 15.0, of course). The repository works perfectly on a local server. In fact, you can try with any other OCA repository; the result is going to be the same.
After doing that, Odoo.sh adds the repo to the project with a new [ADD] commit and tries to build it. However, the tests always fail.
If I go to the log, first, in the install.log section, I can see errors with pip libraries, so I open a shell and try to fix them with pip3 check, adjusting the versions of the libraries it complains about.
After that, when I try to connect to the new build, the odoo.log starts filling up, but with errors as well, particularly this one:
WARNING xxx odoo.addons.base.models.ir_cron: Tried to poll an undefined table on database xxx.
ERROR xxx odoo.sql_db: bad query:
SELECT latest_version
FROM ir_module_module
WHERE name='base'
ERROR: relation "ir_module_module" does not exist
LINE 3: FROM ir_module_module
^
This error usually appears when Odoo has been installed incorrectly, but here the installation is done by Odoo.sh, so... how can I fix this?
Has anyone experienced the same? Any ideas? Maybe the Python libraries are the problem?
One problem can be that the requirements file breaks the installation. Odoo.sh tries to install it automatically, and because Odoo.sh is using outdated Python modules, the installation usually breaks.
https://github.com/OCA/l10n-spain/blob/15.0/requirements.txt
You can try to copy the required modules directly to your repository.
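One reading of that suggestion is to carry the OCA requirements in your own repository's requirements.txt (which Odoo.sh installs on each build) so that you control the pins yourself; a hedged sketch, with the file name and commit message purely illustrative:
$ curl -o requirements.txt https://raw.githubusercontent.com/OCA/l10n-spain/15.0/requirements.txt
$ git add requirements.txt
$ git commit -m "Pin the Python requirements needed by l10n-spain"
$ git push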
Well, in the end I managed to connect to the build after opening a shell and running these commands:
odoosh-restart http
odoo-update all
I still haven't checked which of them did the trick.

create or replace PACKAGE BODY not honored

The command create or replace PACKAGE ... is not being honored.
There have been numerous times where, if I use the @ command in a driver file on a package body or spec, the old version of the code remains intact; the new version doesn't get loaded. This has even happened when I drop the package in question and run @THIS_PACKAGE_SPEC.SQL: the old package comes back. The error logs show the package loaded and compiled normally.
One way to deal with this is to load THIS_PACKAGE_SPEC.SQL into SQL Developer and execute it by itself.
Another is to drop the old package, shut down and restart Oracle SQL Developer, then proceed normally (a sketch follows below).
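A minimal sketch of that drop-and-reload workaround run from a shell, using placeholder credentials and package names and assuming a companion THIS_PACKAGE_BODY.SQL script (only the spec script is named above):
$ sqlplus scott/tiger@mydb <<'EOF'
DROP PACKAGE this_package;
@THIS_PACKAGE_SPEC.SQL
@THIS_PACKAGE_BODY.SQL
SHOW ERRORS PACKAGE BODY this_package
EXIT
EOF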
This problem has shown up using both Oracle SQL Developer and SQL*Plus, in Unix and Linux environments.
Anyone else experience this?

Why can't the install process of the R package "RODBC" under "R CMD INSTALL" find the ODBC driver manager?

I am trying to connect to a Vertica DB from R using the "RODBC" package. The machine I am using is a remote server which doesn't have direct internet access, so I basically "transfer" all source files from my local machine to the remote server to build the system. To give a clear context, I am listing all my steps in attempting to install the "RODBC" package below:
Step 1 - I downloaded the RODBC_1.3-13.tar.gz source file for RODBC and then tried to install it directly with "R CMD INSTALL". However, I encountered the error "ODBC headers sql.h and sqlext.h not found".
Step 2 - After some research, I found that installing "unixodbc-dev" would potentially solve this issue. Therefore, I downloaded all needed dependencies for "unixodbc-dev" and transferred them to the server.
With those in place, I also successfully installed "unixodbc-dev".
However, another error appeared when I tried to re-install "RODBC" using "sudo R CMD INSTALL /home/mli/RODBC_1.3-13.tar.gz": it now reports "no ODBC driver manager found".
As the message indicates, the installation program can't locate my ODBC driver manager. So I downloaded "vertica-client-7.2.3-0.x86_64.tar.gz" and unzipped it on the server.
So now my question is: how can I customize the "R CMD INSTALL" command, say with some parameters, to point the installation program at the driver manager? Or am I even going in the right direction? Please let me know. Any help would be really appreciated! :)
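For what it's worth, a sketch of the kind of invocation the question is asking about, assuming RODBC's documented --with-odbc-include / --with-odbc-lib configure options and an illustrative unixODBC install prefix:
$ sudo R CMD INSTALL /home/mli/RODBC_1.3-13.tar.gz \
    --configure-args='--with-odbc-include=/usr/local/unixODBC/include --with-odbc-lib=/usr/local/unixODBC/lib'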
ADDITION:
I have also tried JDBC: I successfully loaded the "RJDBC" package in R and used the JDBC driver from vertica-client-7.2.3-0.x86_64.tar.gz. I already had "rJava" installed. However, I still got an error when I tried to make the connection. I am listing my results below:
I successfully installed "RJDBC" with "$ R CMD INSTALL RJDBC_0.2-5.tar.gz --library=/usr/local/lib/R/site-library/" and then tried the following script in R. All the lines executed successfully except line 16:
Based on the error message, I assumed the version of the JDBC driver I was using is too new for the Vertica server. So I tried an older JDBC driver instead, "vertica-jdk5-6.1.0-0.jar", which I downloaded from this link: http://www.java2s.com/Code/Jar/v/Downloadverticajdk56100jar.htm
So I moved the file "vertica-jdk5-6.1.0-0.jar" to my home directory on the server and then changed the JDBC driver path in the R script:
As you can see, it still returns the error "FATAL: Unsupported frontend protocol 3.6: server supports 3.0 to 3.5". Am I doing this right? Or is there an issue with the driver that I downloaded? How can I make it work? Please, any help will be really appreciated! Thanks!
A few things:
First, just do sudo apt-get install r-cran-rodbc. The package was created (by yours truly) in no small part because dealing with unixODBC or iODBC is not fun. But even once you have that, you still need the ODBC driver for Linux from Vertica, and that part is fiddly.
Second, I just did something similar the other day but just used JDBC, which worked. You do of course need sudo apt-get install r-cran-rjava which has its own can of worms (but I already mentioned Java...) Still, maybe try that instead?
Third, you can cheat and just use psql pointed to the Vertica port (usually one above the PostgreSQL port).
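A quick sketch of that third option, assuming Vertica's default client port of 5433 (one above PostgreSQL's usual 5432) and placeholder host, user, and database names:
$ psql -h vertica-host.example.com -p 5433 -U dbadmin mydb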

How to define MySQL data source in TomEE?

Platform: TomEE Web profile 1.5.0.
I am trying to do a very basic thing: set up a data source for MySQL. I have read the official guide (http://openejb.apache.org/configuring-datasources.html). It asks us to enter a Resource element in openejb.xml. I cannot find that file anywhere in tomee-webprofile-1.5.0. I read in other places that I could use tomee.xml for the same purpose, so I added this to my conf/tomee.xml:
<Resource id="TestDS" type="DataSource">
JdbcDriver com.mysql.jdbc.Driver
JdbcUrl jdbc:mysql://localhost/test
UserName root
Password some_pass
</Resource>
I copied MySQL driver JAR to tomee/lib folder.
I wrote this code. Showing snippets here:
@Resource(name="TestDS")
private DataSource ds;
Connection con = ds.getConnection();
PreparedStatement ps = con.prepareStatement("select * from UserProfile");
The prepareStatement() call is throwing this exception:
java.sql.SQLSyntaxErrorException: user lacks privilege or object not found: USERPROFILE
at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
Why is the system using the HSQLDB driver? In fact, no matter what I use as the name for @Resource, I get the same exception.
What am I doing wrong? I am starting TomEE from Eclipse, if that makes any difference.
I have tracked down the root cause. The problem happens only when I start TomEE from Eclipse. If I start it from the command line, my data source definition works just fine.
It appears that when I run TomEE from Eclipse, it uses configuration files from [workspace_path]/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/conf. To change this, I had to take these steps in Eclipse:
Remove all deployed projects from the server.
Open server settings and from "Server Locations" choose "Use Tomcat installation". This section is greyed out if you have at least one project still deployed to the server. So, make sure you have done step #1.
Restart the server and redeploy the application. Now, my application is finding the data source.
Normally the installation is explained here: http://tomee.apache.org/tomee-and-eclipse.html
[I would make this a comment to the answer of RajV, but do not have enough reputation to do so.]
Platform: TomEE 1.6.0 Web Profile, eclipse-jee-kepler-SR2-linux-gtk-x86_64, and OpenJDK 1.7.0_51
After doing the steps in http://tomee.apache.org/tomee-and-eclipse.html (including "Workspace Metadata Installation") I got the same error "user lacks privilege or object not found".
My reaction was to:
$ ln -s [workspace_path]/Servers/tomee.xml \
[workspace_path]/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/conf/
As an advantage of this solution, TomEE in Eclipse always uses the current version of [workspace_path]/Servers/tomee.xml without any further manual steps.
For me, a better solution is to put the tomee.xml file in your WTP server directory (/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/conf) and define your data source there.
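A sketch of that variant, reusing the placeholder paths from the symlink example above (a plain copy, so it has to be repeated whenever the file in [workspace_path]/Servers changes):
$ cp [workspace_path]/Servers/tomee.xml \
    [workspace_path]/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/conf/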

pg_bulkload error: "FATAL: unrecognized configuration parameter "wal_level""

I'm giving pg_bulkload a try.
When I try to use the postgresql executable it provides, I get the following error:
/usr/local/src/pg_bulkload-3.0.1/bin> ./postgresql start -D /pg_data
server starting
/usr/local/src/pg_bulkload-3.0.1/bin>
FATAL: unrecognized configuration parameter "wal_level"
Google turned up an exact match for this error when someone was using a 9.0 version of psql to run a script on an instance of Postgres 8.4. I don't see how that could be related to my case--I have two versions of Postgres, but I'm sure I'm pointing at the right directory... any thoughts are very welcome.
As far as I can tell from the docs, PostgreSQL 9.x supports a configuration parameter named "wal_level", but version 8.4 does not. The postgresql.conf file for my 9.0.something server has that parameter; the one for my 8.4 server does not.
PostgreSQL 9.x server configuration
PostgreSQL 8.4 server configuration
Your error message suggests you're running version 8.4, but it's reading the configuration file for a 9.x server. Check your postgresql.conf and installation process. I'm thinking pg_bulkload might have "helped" you in ways you didn't anticipate.
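A quick way to check for that mismatch, assuming the data directory from the transcript above (-D /pg_data) and that the intended server binary is on your PATH:
$ cat /pg_data/PG_VERSION                      # version the data directory was created with
$ postgres --version                           # version of the server binary being started
$ grep -n wal_level /pg_data/postgresql.conf   # is the 9.x-only parameter present?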
I think it can be a bit tricky to install pg_bulkload in the right place if you have more than one version of PostgreSQL installed on your machine.
My first problem was that pg_bulkload (version 3.1.6) could not find the pgport library. I copied the static library libpgport.a to /usr/local/lib, where it was then found, but this approach is not recommended; it is only a quick fix that doesn't hold up, and very soon there was another problem: "undefined reference to `pstrdup'".
I reckon pg_bulkload should provide a way of pointing out where PostgreSQL is installed. What worked for me was changing the Makefile in pg_bulkload-3.1.6/bin, namely the PG_LIBS line, to PG_LIBS = $(libpq) -L<your PostgreSQL installation>/lib -lpgport -lpgcommon (note that -lpgport has to come before -lpgcommon).
Last but not least, to compile and install pg_bulkload you should prefix your PATH with the right PostgreSQL bin directory: PATH=<your PostgreSQL installation>/bin:$PATH make USE_PGXS=1 [install]. This makes sure that pg_bulkload is built for and added to the correct version of PostgreSQL (in my case 9.3). A sketch of these commands follows below. Enjoy!
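Under those assumptions (the prefix /opt/pgsql-9.3 is a placeholder for wherever your target PostgreSQL lives, and the PG_LIBS tweak above has already been made), the build might look like:
$ cd pg_bulkload-3.1.6
$ export PATH=/opt/pgsql-9.3/bin:$PATH    # put the target pg_config first on the PATH
$ make USE_PGXS=1
$ make USE_PGXS=1 install                 # use sudo for this step if the PostgreSQL tree is root-owned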