Intermittent 500 Internal Server Error for images after adding isolation_level to a Flask-SQLAlchemy app on an Apache mod_wsgi server - flask-sqlalchemy

I am using Apache mod_wsgi with a Flask-SQLAlchemy and Marshmallow application, connecting to a remote MS SQL database using pyodbc. Recently I was asked to add the isolation level 'SNAPSHOT', and I did that using apply_driver_hacks:
class SQLiteAlchemy(SQLAlchemy):
    def apply_driver_hacks(self, app, info, options):
        options.update({
            'isolation_level': 'SNAPSHOT',
        })
        super(SQLiteAlchemy, self).apply_driver_hacks(app, info, options)
The project is built to access image blob data from an MS SQL Server and display it on a web page. Soon after adding the isolation level, I see an internal error generated for every few images; doing a Ctrl+F5 displays the image, but then other images are not displayed, and this is in the error log:
mod_wsgi (pid=10694): Exception occurred processing WSGI script
pyodbc.ProgrammingError: ('42000', "[42000] [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]Transaction failed in database 'testdb' because the statement was run under snapshot isolation but the transaction did not start in snapshot isolation. You cannot change the isolation level of the transaction to snapshot after the transaction has started unless the transaction was originally started under snapshot isolation level. (3951) (SQLExecDirectW)")
Edited to add code below:
How would I do that with Flask-SQLAlchemy when not using create_engine?
My app.py file:
app = Flask(__name__)
app.config.from_object('config.ProductionConfig')
db.init_app(app)
ma.init_app(app)
My model.py file:
class SQLiteAlchemy(SQLAlchemy):
    def apply_driver_hacks(self, app, info, options):
        options.update({
            'isolation_level': 'SNAPSHOT',
        })
        super(SQLiteAlchemy, self).apply_driver_hacks(app, info, options)
# To be initialized with the Flask app object in app.py.
db = SQLiteAlchemy()
ma = Marshmallow()

At Engine Level
If you were using the declarative implementation you would have access to the create_engine function (and the scoped session one).
But assuming you're using the Flask-SQLAlchemy implementation, this just calls sqlalchemy.create_engine under the hood (on this line).
There might be a hack for the latter, as there doesn't seem to be a way to pass engine-related options in; they are defined explicitly a few lines up, at #558:
options = {'convert_unicode': True}
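As an aside: newer Flask-SQLAlchemy releases (2.4+; whether that applies to the version in use here is an assumption) accept engine options through the SQLALCHEMY_ENGINE_OPTIONS config key, which would avoid subclassing entirely. A minimal sketch:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# Placeholder DSN; substitute the real mssql+pyodbc connection string.
app.config['SQLALCHEMY_DATABASE_URI'] = 'mssql+pyodbc://user:pass@dsn'
# Forwarded to sqlalchemy.create_engine by Flask-SQLAlchemy 2.4+.
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {'isolation_level': 'SNAPSHOT'}

db = SQLAlchemy(app)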
At Session Level
This looks like it could be slightly easier, because you can pass session options when you initialise SQLAlchemy: see this line. The create_scoped_session method expects a dictionary which can be passed to the __init__ method as session_options.
So when you initialise the library you could try something like:
db = SQLiteAlchemy(session_options={'isolation_level': 'SNAPSHOT'})
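To show where that call would sit in the question's own files (a sketch only; the answer's "you could try" caveat still applies, since these options are handed to the session factory rather than to create_engine):

from flask import Flask
from flask_marshmallow import Marshmallow
from flask_sqlalchemy import SQLAlchemy

# model.py: pass the option at construction time instead of via apply_driver_hacks.
db = SQLAlchemy(session_options={'isolation_level': 'SNAPSHOT'})
ma = Marshmallow()

# app.py: initialisation is unchanged.
app = Flask(__name__)
app.config.from_object('config.ProductionConfig')
db.init_app(app)
ma.init_app(app)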

Related

Syntax for creating an event session in SQL Server

I am attempting to create an event session for my SQL database, as I keep receiving this error when I try to create the session using the UI under the Extended Events -> Sessions folder:
The target, "5B2DA06D-898A-43C8-9309-39BBBE93EBBD.package0.event_file", encountered a configuration error during initialization. Object cannot be added to the event session. (null) (Microsoft SQL Server, Error: 25602)
I am now attempting to create it using SQL commands, but I am having trouble with the syntax, as the syntax from the official documentation does not seem to work. Does anyone know how I would fix this? Here is what I have so far:
CREATE EVENT SESSION CrystalLogsv3
ON SERVER
ADD EVENT sqlos.async_io_requested,
ADD TARGET package0.asynchronous_file_target
(SET filename='https://<name>logs.blob.core.windows.net/<name>/<filename>.xel',
credential = [https://<name>logs.blob.core.windows.net/<name>])
;

NullPointerException on loading data into Grakn

I have created a backup of Grakn with the exporter tool like this:
./grakn server export 'old_test' backup.grakn
$x isa export,
has status "completed",
has progress (100.0%),
has count (105 / 105);
I then wanted to import this into a new keyspace with
./grakn server import 'new_test' backup.grakn
But I got this error below:
An error has occurred during boot-up. Please run 'grakn server status' or check the logs located under the 'logs' directory.
io.grpc.StatusRuntimeException: INTERNAL: java.lang.NullPointerException
You need to import your schema into the new keyspace first; this error occurs because the server cannot find a schema label in your dataset. The steps for migrating a schema are described in the docs: https://dev.grakn.ai/docs/management/migration-and-backup

Pentaho Data Integration: Error Handling

I'm building out an ETL process with Pentaho Data Integration (CE) and I'm trying to operationalize my Transformations and Jobs so that they can be monitored. Specifically, I want to catch any errors and then send them to an error-reporting service like Honeybadger or New Relic. I understand how to do row-level error reporting, but I don't see a way to do job or transformation failure reporting.
Here is an example job.
The down path is where the transformation succeeds but has row errors. There we can just filter the results and log them.
The path to the right is the case where the transformation fails all-together (e.g. DB credentials are wrong). This is where I'm having trouble: I can't figure out how to get the error info to be sent.
How do I capture transformation failures to be logged?
You cannot capture job-level error details inside the job itself.
However, there are other options for monitoring.
The first option is database logging for transformations or jobs (see the "Log" tab in the job/transformation parameters dialog). This way you always have up-to-date information about the execution status, so you can, say, write a job that periodically scans the logging database and sends error reports wherever you need.
That said, this option is fairly heavyweight to develop and support, and not too flexible for further modification. So in our company we ended up with monitoring at the job-execution level: when you run a job with kitchen.bat and it fails for any reason, kitchen exits with an "error" status, so you can easily examine it and take the necessary actions with whatever tools you like - .bat commands, PowerShell or (in our case) Jenkins CI.
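A minimal sketch of that exit-status approach in Python (the job path and the report_error() hook are hypothetical; the hook stands in for a service such as Honeybadger or New Relic):

import subprocess

def report_error(job, code, log_tail):
    # Placeholder: push the failure to your error-reporting service here.
    print('Job %s failed with exit code %d:\n%s' % (job, code, log_tail))

def run_job(job_path):
    # kitchen exits with a non-zero status when the job fails for any
    # reason, including all-together failures like bad DB credentials.
    result = subprocess.run(
        ['kitchen.sh', '-file', job_path],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        report_error(job_path, result.returncode, result.stderr[-2000:])

run_job('/etl/jobs/nightly_load.kjb')  # hypothetical job file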
You could use the writeToLog("e", "Message") function in the Modified Java Script step.
Documentation:
// Writes a string to the defined Kettle Log.
//
// Usage:
// writeToLog(var);
// 1: String - The Message which should be written to
// the Kettle Debug Log
//
// writeToLog(var,var);
// 1: String - The Type of the Log
// d - Debug
// l - Detailed
// e - Error
// m - Minimal
// r - RowLevel
//
// 2: String - The Message which should be written to
// the Kettle Log

Cannot connect to DB2 from Groovy

I’m trying to open an SQL instance within a driver which uses the DB2Driver from IBM.
The interesting part is that when I include:
def DB2Driver = new DB2Driver()
That initializes just fine.
But when I do
Sql.newInstance(info.getHost(), info.getConnectionMetaData().getParameterValue('username'), info.getConnectionMetaData().getParameterValue('password'), info.getConnectionMetaData().getParameterValue('driverClass'))
Or
Sql.newInstance(info.getHost(), info.getConnectionMetaData().getParameterValue('username'), info.getConnectionMetaData().getParameterValue('password'), 'com.ibm.db2.jcc.DB2Driver')
It will fail to open a SQL connection, saying that a suitable driver isn't found. How can I get the connection to DB2 to open?
Assuming that you are using a Groovy script with @Grab and @Grapes annotations, you probably need to configure Grape for JDBC drivers:
Because of the way JDBC drivers are loaded, you’ll need to configure Grape to attach JDBC driver dependencies to the system class loader
In groovy.sql.Sql the JDBC DriverManager is used to get a connection: DriverManager.getConnection(). Since it needs the driver dependencies attached to the system class loader, you need to do this with @GrabConfig.
For example, this script
@Grapes([
    @Grab(group='org.hsqldb', module='hsqldb', version='2.3.2')
])
import groovy.sql.Sql
def sql = Sql.newInstance('jdbc:hsqldb:mem:testdb', 'sa', '', 'org.hsqldb.jdbcDriver')
println 'SQL connection ready'
fails with the exception java.sql.SQLException: No suitable driver found for jdbc:hsqldb:mem:testdb, but with
@Grapes([
    @Grab(group='org.hsqldb', module='hsqldb', version='2.3.2'),
    @GrabConfig(systemClassLoader=true)
])
it works perfectly.

hsqldb properties

I am using HSQLDB, which has the following settings in its properties file (not set by me):
hsqldb.cache_size_scale=8
readonly=false
hsqldb.nio_data_file=true
hsqldb.cache_scale=14
version=1.8.0
hsqldb.default_table_type=memory
hsqldb.cache_file_scale=1
modified=yes
hsqldb.cache_version=1.7.0
hsqldb.original_version=1.8.0
hsqldb.compatible_version=1.8.0
The DB started giving errors in the logs:
java.sql.SQLException: S1000 General error java.util.NoSuchElementException
Some searching on Google suggested that this happens because the size limit of the .data file has been reached. The size of the .data file is around 0.7 GB.
If I increase cache_file_scale, will the above error disappear? The current settings are:
hsqldb.default_table_type=memory
hsqldb.cache_file_scale=1
If hsqldb.cache_file_scale=3, does this mean that the database is in memory and will require 3 GB? If memory is an issue, how can it be reduced?
The current setting allows up to 2GB in the data file.
I suggest you perform a SHUTDOWN SCRIPT to clear up any problems. If you have further problems, contact the HSQLDB project.
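For reference, a hedged example of raising the limit in the database's .properties file (to my knowledge HSQLDB 1.8 accepts only 1 or 8 here, for 2 GB and 16 GB limits, and the value should only be changed while the database is cleanly shut down):

# raises the .data file limit from 2 GB to 16 GB (edit only after SHUTDOWN)
hsqldb.cache_file_scale=8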