sequelize-typescript audit log for create, update, delete

I would like to log all actions (create, update, delete); for updates I would prefer to log the old value and the new value, using sequelize-typescript. But I have no idea how to achieve this. Can anyone help?

If you are running the CLI, you can set the environment variable DEBUG=sequelize and you will get detailed SQL statements.
When using the library in Node, you can pass a logging option when you create the DB instance. You won't get the before and after values, but you will get the SQL commands being sent to the database.
const myVerboseDb = new Sequelize(config.db, config.username, config.password, {
  host: "localhost",
  port: 3456,
  dialect: "postgres",
  logging: console.log // log every SQL command; pass a function rather than the deprecated `true`
});
https://github.com/sequelize/sequelize/issues/610#issuecomment-69675229
https://sequelize.org/master/manual/getting-started.html#logging
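Both of those options only surface the generated SQL, though. For the old/new values the question asks about, Sequelize lifecycle hooks are the usual route. Below is a minimal sketch using sequelize-typescript's hook decorators; the Book model, its title column, and logging to the console are invented for illustration, and note that these hooks do not fire for bulk operations such as Model.update() unless you pass individualHooks: true.

import {
  Table, Column, Model, AfterCreate, BeforeUpdate, AfterDestroy,
} from 'sequelize-typescript';

// hypothetical model, just to show where the hooks live
@Table
export class Book extends Model {
  @Column
  title!: string;

  @AfterCreate
  static auditCreate(instance: Book) {
    console.log('CREATE', instance.toJSON());
  }

  @BeforeUpdate
  static auditUpdate(instance: Book) {
    // changed() lists the dirty attributes; previous() still holds
    // the pre-update value at this point in the lifecycle
    for (const field of instance.changed() || []) {
      console.log('UPDATE', field, instance.previous(field), '->', instance.getDataValue(field));
    }
  }

  @AfterDestroy
  static auditDelete(instance: Book) {
    console.log('DELETE', instance.toJSON());
  }
}

Instead of console.log you could insert a row into a dedicated audit table from the same hooks, ideally on the same transaction as the triggering operation.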

Related

How to Impersonate Impala queries on Superset

I'm setting up Superset (0.36.0) in production mode (with Gunicorn), and I would like to set up impersonation when running Impala queries on my Kerberized cluster, so that each Superset user has the same privileges on tables/databases as they have in Hive/Hue/HDFS. I've tried setting "Impersonate the logged on user" to true in my database config, but it does not change the user that runs the query; it always uses the celery-worker user.
My database config is:
Extras:
{
  "metadata_params": {},
  "engine_params": {
    "connect_args": {
      "port": 21050,
      "use_ssl": "True",
      "ca_cert": "/path/to/my/cert.pem",
      "auth_mechanism": "GSSAPI"
    }
  },
  "metadata_cache_timeout": {},
  "schemas_allowed_for_csv_upload": []
}
My query summary in Cloudera Manager (5.13) shows the same thing: the query runs as the celery-worker user.
How can I enable Impersonate correctly in my Superset? Maybe there is something related to the config impala.doas.user in HiveServer2 connection, but I don't know how to config this properly.
I faced the same issue and was able to get it working for Hive. The issue seems to be in the file hive.py located under ${YOUR_INSTALLATION_PATH}/superset/db_engine_specs.
If you just comment out line 435, it should work. Unfortunately, I don't understand Python well enough to tell you the exact reason.
I found this by brute force, running the source code with log statements added:
if (
    backend_name == "hive"
    # comment out this line
    # and "auth" in url.query.keys()
    and impersonate_user is True
    and username is not None
):
    configuration["hive.server2.proxy.user"] = username
return configuration
Alternatively, if you do not want to modify the source code, you can modify the URL while creating the data source in Superset:
hive://<url>/default?auth=NONE (when there is no security)
hive://<url>/default?auth=KERBEROS
hive://<url>/default?auth=LDAP

How to create script using Liquibase without providing the db details like url, user-name, password and driver?

I created the DDL scripts using Liquibase by providing the input database changelog.
The code looks like this:
private void toSQL(DatabaseChangeLog d)
        throws DatabaseException, LiquibaseException, UnsupportedEncodingException, IOException {
    FileSystemResourceAccessor fsOpener = new FileSystemResourceAccessor();
    CommandLineResourceAccessor clOpener = new CommandLineResourceAccessor(this.getClass().getClassLoader());
    CompositeResourceAccessor fileOpener = new CompositeResourceAccessor(new ResourceAccessor[] { fsOpener, clOpener });
    Database database = CommandLineUtils.createDatabaseObject(fileOpener, this.url, this.username, this.password,
            this.driver, this.defaultCatalogName, this.defaultSchemaName,
            Boolean.parseBoolean(this.outputDefaultCatalog), Boolean.parseBoolean(this.outputDefaultSchema),
            this.databaseClass, this.driverPropertiesFile, this.propertyProviderClass, this.liquibaseCatalogName,
            this.liquibaseSchemaName, this.databaseChangeLogTableName, this.databaseChangeLogLockTableName);
    Liquibase liquibase = new Liquibase(d, null, database);
    liquibase.update(new Contexts(this.contexts), new LabelExpression(this.labels), getOutputWriter());
}
and my liquibase.properties goes like this
url=jdbc\:sqlserver\://server\:1433;databaseName\=test
username=test
password=test#123
driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
referenceUrl=hibernate:spring:br.com.company.vacation.domain?dialect=org.hibernate.dialect.SQLServer2008Dialect
As you can see, Liquibase expects a lot of DB parameters such as url, username, password and driver, which I will not be able to provide.
How can I achieve this without providing any of those parameters? Is it possible?
No, it is not possible. If you want Liquibase to interact with a database, you have to tell it how to connect to that database.
I investigated Liquibase's offline mode a little. It goes like this:
Running in offline mode only supports updateSql, rollbackSQL, tag, and tagExists. It does not support direct update, diff, or preconditions as there is nothing to actually update or state to check.
An offline database is “connected” to using a url syntax of offline:DATABASE_TYPE?param1=value1&aparam2=value2.
The following code will suffice:
this.url = "offline:postgres?param1=value1&aparam2=value2";
this.driver = null;
this.username = null;
this.password = null;
Hence the DB details need not be provided; the offline URL can be built up from the store type alone.

How to get raw SQL from Sequelize migrations

I have a bunch of Sequelize migration files. They all look like:
module.exports = {
  up: async (queryInterface, Sequelize) => { /* up migration */ },
  down: async (queryInterface, Sequelize) => { /* down migration */ },
};
Is there a programmatic way to get the SQL queries from those files? Using the Node ecosystem is fine. The only requirement is that it happens automatically.
Why do I want to do this?
I want to create SQL migrations from the JavaScript files to put into the entrypoint of my Postgres base image for local development, and I don't want to put Node.js and Sequelize into an image that otherwise depends only on the official Postgres base image from Docker Hub.
If you already have a database with the right schema, all you need is the schema. You can use the pg_dump command to get it (-s dumps the schema only, no data):
pg_dump.exe -U username -d databasename -s > myschema.sql
You can now import this schema:
psql -d database_name -h localhost -U postgres < myschema.sql
I know you're asking how to get this programmatically, but just exposing the raw SQL is valuable. I was able to get the raw SQL (sorting this out led me to this question) by adding the logging key to the options object.
This is my migration:
await queryInterface.addIndex(
  constants.EVENTS_TABLE_NAME,
  ['created_at'],
  { using: 'brin', concurrently: true, logging: console.log }
);
and the output from the migration:
== 20220311183756-create-brin-index-on-created-at: migrating =======
Executing (default): CREATE INDEX CONCURRENTLY "events_created_at" ON "events" USING brin ("created_at")
== 20220311183756-create-brin-index-on-created-at: migrated (0.019s)
Here is an example from their docs:
await sequelize.query('SELECT 1', {
  // A function (or false) for logging your queries.
  // Will get called for every SQL query that gets sent
  // to the server.
  logging: console.log,
  // If plain is true, then sequelize will only return the first
  // record of the result set. In case of false it will return all records.
  plain: false,
  // Set this to true if you don't have a model definition for your query.
  raw: false,
  // The type of query you are executing. The query type affects how results are formatted before they are passed back.
  type: QueryTypes.SELECT
});
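If you want the SQL for all migrations programmatically rather than one at a time, one rough approach is to invoke each migration's up() yourself with a logging callback that collects the statements. This is a sketch only: it assumes a disposable Postgres instance you can actually run the migrations against, and the connection string, directory, and output file name are placeholders.

import * as fs from 'fs';
import * as path from 'path';
import { Sequelize } from 'sequelize';

const statements: string[] = [];
// scratch database; the migrations really do execute against it
const sequelize = new Sequelize('postgres://user:pass@localhost:5432/scratch', {
  // each statement arrives as "Executing (default): <sql>"
  logging: (msg: string) => statements.push(msg.replace(/^Executing \([^)]+\): /, '') + ';'),
});

async function dumpSql(migrationsDir: string) {
  for (const file of fs.readdirSync(migrationsDir).sort()) {
    const migration = require(path.resolve(migrationsDir, file));
    await migration.up(sequelize.getQueryInterface(), Sequelize);
  }
  fs.writeFileSync('migrations.sql', statements.join('\n'));
  await sequelize.close();
}

dumpSql('./migrations').catch(console.error);

Since the statements actually run, point this at a database you can throw away afterwards.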

Django 1.11 - how to print SQL statements with substituted variables

This question is different from "log all sql queries".
I tried the logging configurations from the answers there, but they do not work the way I would like them to, so please read on.
What I want to do is make the Django (1.11.x) debug server log SQL queries in such a way that I can redirect them to a *.sql file and immediately execute them.
For this I need SQL statements where all the variables are already substituted, so I DON'T want this:
WHERE some_column in (:arg1, :arg2, ...)
but I want this instead:
WHERE some_column in ('actual_value_1', 'actual_value2', ...)
Can you please help me figure out how to do this?
Please note that I don't want the SQL query printed in the browser (by some debug app like django_debug_toolbar) but printed to the console.
Please note that I don't want to type Django QuerySet queries into a console - I want to type a URL in the browser, i.e. make an actual HTTP request to the Django debug server, and see it print a SQL query in such a way that I can execute it later using SQL Plus or any other console tool.
I think the tool I just made would be perfect for you.
https://pypi.python.org/pypi/django-print-sql
https://github.com/rabbit-aaron/django-print-sql
Install like this:
pip install --upgrade django-print-sql
Use it like this in your view:
from django_print_sql import print_sql

from django.contrib.auth.models import User
from django.views import View

class MyView(View):

    def get(self, request):
        with print_sql():
            User.objects.get(id=request.user.id)
            # and more....
This is a bit of a hack, because we have to strip the default django.db.backends log messages of the time taken and the args. After making these changes you should have a file of pure SQL that you are free to run as you wish...
First you want to set up your logging settings to reference the new log handler.
Settings:
LOGGING = {
    'version': 1,
    'handlers': {
        'sql': {
            'level': 'DEBUG',
            'class': 'my_project.loggers.DjangoSQLFileHandler',
            'filename': '/path/to/sql.log'
        },
    },
    'loggers': {
        'django.db.backends': {
            'handlers': ['sql'],
            'level': 'DEBUG'
        }
    }
}
Then you need to define your new handler and strip out the unwanted text from the message.
my_project/loggers.py:
from logging import FileHandler

class DjangoSQLFileHandler(FileHandler):

    def emit(self, record):
        # django.db.backends messages look like "(0.002) SELECT ...; args=(...)";
        # keep only the SQL itself, terminated with a semicolon
        record.msg = record.getMessage().split(') ', 1)[1].split(';', 1)[0] + ';'
        record.args = None
        super(DjangoSQLFileHandler, self).emit(record)

Cannot create an index, i.e. /{db}/_index not working on 2.0.0

I spent hours trying to figure out why I cannot use Mango query features. In Fauxton I can neither add Mango indexes nor run a Mango query. For instance, in Node.js:
var PouchDB = require('pouchdb');
PouchDB.plugin(require('pouchdb-find'));
var db = new PouchDB('http://localhost:5986/books');
db.createIndex({ index: { fields: ['nom'] } })
  .then(console.log)
  .catch(console.log);
=> { error: 'bad_request',
     reason: 'Referer header required.',
     name: 'bad_request',
     status: 400,
     message: 'Referer header required.' }
Any clue welcome! Thanks
It looks like this plugin can only perform the search operation on a local PouchDB database, and not translate it to a remote CouchDB query.
You probably want to set up the local db like this:
var db = new PouchDB('books') (instead of the URL) and then set up replication for your documents as described in the PouchDB docs. Your index will not be synced, however.
A side benefit of this is that you can always query your database even if the CouchDB server goes down.
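A sketch of that setup, reusing the 'books' database name, the remote URL, and the 'nom' field from the question (the selector shown is just an example query):

var PouchDB = require('pouchdb');
PouchDB.plugin(require('pouchdb-find'));

// local database; pouchdb-find runs the query here
var db = new PouchDB('books');

// keep the local copy in sync with the remote CouchDB
// (documents replicate, the index itself does not)
db.sync('http://localhost:5986/books', { live: true, retry: true });

db.createIndex({ index: { fields: ['nom'] } })
  .then(function () {
    // match any document that has a 'nom' field
    return db.find({ selector: { nom: { $gte: null } } });
  })
  .then(console.log)
  .catch(console.log);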