Moqui: connect to master and replica databases

I am new to Moqui and want to connect to two MySQL databases, one to write to (master) and one to read from (replica). How do I do this?

I found a partial solution for this. I used:
// get a plain JDBC connection for the entity group that maps to the replica datasource
Connection con = ec.entity.getConnection("replica-group-name")
ps = con.prepareStatement("SELECT statement")
rs = ps.executeQuery()
// close rs, ps, and con when finished
This code does 50% of what I want; I am now trying to make entity-find use the replica instead of the transactional datasource.

Related

PostgreSQL setup chunk (RMarkdown)

I have been looking around for a while, for example in the R Markdown Definitive Guide and elsewhere, but found no satisfactory, clear description or example of how to connect to a PostgreSQL database. The way the information in that guide is presented does not make sense to me, so I do not understand it.
The main piece of information I found is this (placed in a chunk with the {r setup} header, according to the guide):
library(DBI)
db = dbConnect(RSQLite::SQLite(), dbname = "sql.sqlite")
knitr::opts_chunk$set(connection = "db")
The library(DBI) part I of course get, but not the rest, except that knitr is a package used for certain purposes, and some bits here and there. Basically, I do not know how to set this up for PostgreSQL.
So what would be a good example for a first PostgreSQL setup chunk?
(As a side note: because I thought I was wasting too much time, I just used RPostgres directly whenever I needed it. But because I thought that using SQL chunks would have greater advantages, I looked again. Maybe, in the end, I would be better off without the direct SQL chunks, but if I understand them sufficiently, maybe that would pay off, for example in having to type less or in getting a nicer-looking document.)
The dbConnect line is what connects to your database; in the example it is a local SQLite database file, and you will need to modify it to connect to your PostgreSQL instance.
There's an example at "Read/write Postgres large objects with DBI & RPostgres":
con <- dbConnect(
RPostgres::Postgres(),
dbname = "postgres",
host = "localhost",
port = 5433,
user = "postgres",
password = "mysecretpassword"
)
(change the details to match your database)
The knitr::opts_chunk part sets a default chunk option in knitr, so that you don't need to specify connection = "db" in every SQL chunk.
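Putting the two pieces together, a minimal {r setup} chunk for PostgreSQL might look like this (a sketch: the database name, host, port, and credentials are placeholders to adapt):
library(DBI)
# connect to PostgreSQL instead of the SQLite example
db <- dbConnect(
  RPostgres::Postgres(),
  dbname = "mydb", host = "localhost", port = 5432,
  user = "me", password = "secret"
)
# make this connection the default for every SQL chunk
knitr::opts_chunk$set(connection = "db")
After that, each {sql} chunk runs its query against db without having to name the connection itself.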

Receiving conn busy on different queries

I have a general question about Golang and performing queries. I am using gin-gonic as my HTTP framework and the pgx driver to perform different queries. I'm running into a problem where some queries return conn busy, and I need to know how to solve this problem for future reference. Note that some of my queries use pgx.Conn and others pgx.Pool; I have also configured my pgx.Pool with a maximum of 10 connections.
An example query I have is
SELECT user_id, first_name, last_name, email, users.username, dob, country, is_verified, bio,
       profile_json, tier, casual_games, stream_time, profile_image,
       is_streaming, users.created_at
FROM users
INNER JOIN profiles ON profiles.username = users.username
WHERE users.user_id = $1
https://pkg.go.dev/github.com/jackc/pgx#Conn
Conn is a PostgreSQL connection handle. It is not safe for concurrent usage. Use ConnPool to manage access to multiple database connections from multiple goroutines.
It seems your code is breaking due to use of the non-thread-safe Conn: pgx returns conn busy when a query is issued on a connection that is already executing one.
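A minimal sketch of the safe pattern, assuming pgx v4 and a placeholder DSN: route every query through a pgxpool.Pool, which is safe for concurrent use from multiple goroutines, instead of sharing a single pgx.Conn.
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v4/pgxpool"
)

func main() {
	ctx := context.Background()
	// placeholder DSN; adjust to your database
	pool, err := pgxpool.Connect(ctx, "postgres://user:pass@localhost:5432/mydb")
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()

	// each call transparently checks a connection out of the pool
	// and returns it when the call completes
	var email string
	err = pool.QueryRow(ctx, "SELECT email FROM users WHERE user_id = $1", 1).Scan(&email)
	if err != nil {
		log.Fatal(err)
	}
	log.Println(email)
}
If you really need a dedicated connection, Acquire one from the pool and Release it when done, rather than keeping a long-lived pgx.Conn shared between handlers.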

Copying table contents from one database to another in a VB.NET app using OracleDataAdapter.InsertCommand

The rundown of what I'm trying to achieve: essentially an update app that will pull data from our most recent production databases and copy their contents to the Devl or QA databases. I plan to limit what is chosen to a number of rows, so this update can happen more consistently by only getting what we need; right now these databases rarely get updated due to the vast size of the copy job. The actual PL/SQL commands will be stored in a table that I plan to reference for each table. I'm currently stuck on the best and easiest way to transfer data between these two databases while still getting my CommandText to be used. I figured the best way would be the OracleDataAdapter.InsertCommand, but very few examples of what I'm doing can be found; any suggestions aside from .InsertCommand are welcome, as I'm still getting my footing with Oracle altogether.
Dim da As OracleDataAdapter = New OracleDataAdapter
Dim cmd As New OracleCommand()
GenericOraLoginProvider.Connect()
' Create the SelectCommand.
cmd = New OracleCommand("SELECT * FROM TAT_TESTTABLE", GenericOraLoginProvider.Connection())
da.SelectCommand = cmd
' Create the InsertCommand.
' Note: this INSERT has no column list, VALUES clause, or parameters bound to
' source columns, and da.Update() is never called, so no rows are written.
cmd = New OracleCommand("INSERT INTO TAT_TEMP_TESTTABLE", GenericOraLoginProvider.Connection())
da.InsertCommand = cmd
Question: this is an example of what I've been trying as a first step with the InsertCommand; TAT_TESTTABLE and TAT_TEMP_TESTTABLE are just junk tables that I loaded with data to see if I could move things this way.
The reason I'm asking: the data isn't transferring over. While these tables are on the same database today, in the future they will not be, along with the change to the previously mentioned PL/SQL commands. Thank you for any help or words of wisdom you can provide, and sorry for the wall of text; I tried to keep it specific.
Look up SqlBulkCopy. I use this to transfer data between all kinds of vendor databases: https://msdn.microsoft.com/en-us/library/ex21zs8x(v=vs.110).aspx
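A rough sketch of that approach in VB.NET, with placeholder connection strings; note that SqlBulkCopy only writes to SQL Server destinations (for an Oracle-to-Oracle copy, ODP.NET's OracleBulkCopy plays the equivalent role):
Imports System.Data.SqlClient
Imports Oracle.DataAccess.Client ' assumes the ODP.NET provider

Module BulkCopySketch
    Sub Main()
        ' stream rows from the Oracle source straight into the target table
        Using src As New OracleConnection("source connection string"),
              dest As New SqlConnection("destination connection string")
            src.Open()
            dest.Open()
            Dim cmd As New OracleCommand("SELECT * FROM TAT_TESTTABLE", src)
            Using reader As OracleDataReader = cmd.ExecuteReader()
                Using bulk As New SqlBulkCopy(dest)
                    bulk.DestinationTableName = "TAT_TEMP_TESTTABLE"
                    bulk.WriteToServer(reader) ' copies every row from the reader
                End Using
            End Using
        End Using
    End Sub
End Module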

How to manage a HSQLDB in JBoss AS7.1.x

This is my first attempt at using EJB 3.1 entities with a JBoss AS 7.1.1 server. I found out that HSQLDB is no longer included in version 7 of JBoss. First I added hsqldb.jar through the Administration Console --> Deployments --> Manage Deployments. After that I added a new data source through Profile --> Connector --> Datasources.
My first example code works fine:
[...]
InitialContext ic = new InitialContext();
DataSource ds = (DataSource) ic.lookup("java:/DefaultDS");
con = ds.getConnection();
stmt = con.createStatement();
stmt.execute("drop table timers;");
stmt.execute("Create table timers(id char(10));");
stmt.execute("INSERT INTO timers (id) VALUES (20)");
stmt.execute("INSERT INTO timers (id) VALUES (21)");
ResultSet number = stmt.executeQuery("select * from timers");
[...]
My question is how I can manage (create, drop, and update tables in) the DB that is created in the folder jboss\standalone\data\hypersonic. At the moment I have no overview of which tables exist, their structure, or their data.
Does someone have a tip or a tutorial for me that deals with this problem? Thank you.
In my case it was easier than I thought at the beginning. I needed to manage the DB that is stored in the AS 7.1.x server. Because of the missing JMX-Console I wasn't able to get access to that DB through the administration tools. I added the datasource and the deployment as described in the first post.
To manage such a DB you can use the 'HSQLDB Database Manager', select 'HSQL Database Engine Standalone' as the type and 'jdbc:hsqldb:file:«MY_PATH_TO_DB_FOLDER_IN_JBOSS»' for the URL. Now I can manage the DB outside the server and EJB environment.
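For example, the same tool can be launched from the command line (a sketch; the path placeholder is the one above, and SA is HSQLDB's default user with an empty password):
java -cp hsqldb.jar org.hsqldb.util.DatabaseManagerSwing --url "jdbc:hsqldb:file:«MY_PATH_TO_DB_FOLDER_IN_JBOSS»" --user SA
Note that the server must not hold the file database open at the same time, since a file URL takes an exclusive lock.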
Thank you fredt for your help and your inspiration.
You can always query INFORMATION_SCHEMA tables to find out which tables already exist. Once you know a table exists, you can execute a query that you know will not take long, in order to find out about the state of the data.
For example, the first query will show you whether there is a 'TIMERS' table, and the second query will show whether there is any data in the table:
SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TIMERS'
SELECT * FROM TIMERS LIMIT 1
Always try to define a primary key on each table in order to speed up queries.
To access a database embedded in an app server, you should add the org.hsqldb.server.Servlet class as a servlet, then connect to the app server using a database management tool and a URL such as jdbc:hsqldb:http:<your servlet url here>
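Once the servlet is registered, a plain JDBC client (or a management tool) can connect through it; a minimal sketch where the host, port, and servlet path are placeholders and SA is HSQLDB's default user:
import java.sql.Connection;
import java.sql.DriverManager;

public class HsqldbHttpClient {
    public static void main(String[] args) throws Exception {
        // connect to the in-server database through the HSQLDB servlet over HTTP
        Connection con = DriverManager.getConnection(
                "jdbc:hsqldb:http://localhost:8080/myapp/hsqldb", "SA", "");
        System.out.println("connected: " + !con.isClosed());
        con.close();
    }
}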

How does "set rs = conn.execute(sql)" actually work?

At the moment I am trying to improve performance in my Classic ASP application, and I have reached the point of improving SQL transactions.
I have been reading a little from http://www.somacon.com/p236.php
So, my question is:
How does the line set rs = conn.execute(sql) actually work?
I believe that the line defines a variable named rs and binds ALL the data collected from the database through the SQL statement (e.g. select * from users).
Then afterwards I can throw the database connection to hell and redefine my sql variable if I please; is this true?
If that is true, will I then get the best performance by executing my code like this:
set conn = server.createobject("adodb.connection")
dsn = "Provider = sqloledb; Data Source = XXX; Initial Catalog = XXX; User Id = XXX; Password = XXX"
conn.open dsn
sql = "select id, name from users"
set rs = conn.execute(sql)
conn.close
' Do whatever I want with the variable rs
conn.open dsn
sql = "select id from logins"
set rs = conn.execute(sql)
conn.close
' Do whatever I want with the variable rs
conn.open dsn
sql = "select id, headline from articles"
set rs = conn.execute(sql)
conn.close
' Do whatever I want with the variable rs
set conn = nothing
In this example I open and close the connection each time I do a SQL transaction.
Is this a good idea?
Is this a good idea?
No, but not for the reasons indicated by Luke. The reality is that ADODB will cache connections anyway, so opening and closing connections isn't all that expensive after all. However, the question proceeds from the misinformation you appear to have about the behaviour of a recordset...
From your comment on Luke's answer:
But is it correct that it stores all the data in the variable when executed?
Not unless you have carefully configured the recordset to return a static client-side cursor, and even then you would have to ensure that the cursor is completely filled. Only then could you disconnect the recordset from the connection and yet continue to use the data in it.
By default a SQL Server connection will deliver a simple "fire-hose" rowset (this isn't even really a cursor): the data is delivered raw from the query, only a small amount of buffering of incoming records occurs, and you can't navigate backwards.
The most efficient way to minimise the amount of time you need the connection is to use the ADODB Recordset's GetRows method. This will suck all the rows into a 2-dimensional array of variants. Having got this array you can dispense with the recordset and connection.
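A minimal sketch of that pattern in Classic ASP (the connection string and query are placeholders):
' Pull all rows into a 2-D variant array, then release the recordset and
' connection immediately; the array survives on its own.
Dim conn, rs, data, i
Set conn = Server.CreateObject("ADODB.Connection")
conn.Open "Provider=sqloledb; Data Source=XXX; Initial Catalog=XXX; User Id=XXX; Password=XXX"
Set rs = conn.Execute("select id, name from users")
data = rs.GetRows() ' data(field, row)
rs.Close : Set rs = Nothing
conn.Close : Set conn = Nothing
' work with the array after the connection is gone
For i = 0 To UBound(data, 2)
    Response.Write data(0, i) & " " & data(1, i) & "<br>"
Next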
Much is still made of minimising the number of connections maintained on a server, but in reality, on modern hardware, that is not a real issue for the majority of apps. The real problem is the amount of time an ongoing query maintains locks in the DB. By consuming and closing a recordset quickly you minimise the time locks are held.
A word of caution though: the tradeoff is an increased demand for memory on the web server, so you need to be careful you aren't just shifting one bottleneck to another. That said, there are plenty of things you can do about that: use a 64-bit OS with plenty of memory, or scale out the web servers into a farm.
Nope, opening and closing connections is costly. Open it, reuse the recordset like you are, then close it.