RabbitMQ Automatic Consuming

I've written a Python file that writes something to a database, and this task is added to a RabbitMQ queue.
How can the task be consumed from the queue and written to the DB automatically?
import MySQLdb
from celery import Celery
import pika

app2 = Celery('task2', broker='amqp://guest@localhost//')

@app2.task(queue='Test')
def update_db():
    # Open database connection
    db = MySQLdb.connect("localhost", "root", "root", "test")
    # Prepare a cursor object using the cursor() method
    cursor = db.cursor()
    # Prepare the SQL query to insert a record
    sql = "insert into tt values(1,'james')"
    # print sql
    # Execute the SQL command
    cursor.execute(sql)
    # Commit your changes in the database
    db.commit()
    # Disconnect from the server
    db.close()

You have to run a Celery worker, which will listen to the queue and execute your code:
http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html#running-the-celery-worker-server
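For example, with the task module above saved as task2.py, you can start a worker bound to the Test queue with celery -A task2 worker -Q Test --loglevel=info, and then enqueue the task from any other process (a minimal sketch; the module and queue names are the ones from the question's code):

from task2 import update_db

# Send the task to the 'Test' queue; the running worker picks it up
# and performs the database insert.
update_db.apply_async(queue='Test')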


Scheduling Query by using Script

Is it possible to set up query scheduling using a script? For creating a table we can use a script such as:
CREATE TABLE dataset.xxx AS
...
Is there any way to do the same here, i.e. create the scheduled query from a script instead of clicking the 'Schedule Query' button?
As per the documentation, in order to schedule a query you can use one of the following methods:
the BigQuery console, clicking "Schedule Query" as you mentioned in your question
the bq command
the Python API
I will share some examples with you, first using the bq command. From the Cloud Shell environment you can execute the following:
bq query \
    --use_legacy_sql=false \
    --destination_table=mydataset.mytable \
    --display_name='My Scheduled Query' \
    --schedule='every 24 hours' \
    --replace=true \
    'SELECT
       1
     FROM
       mydataset.test'
The --schedule flag shown above is what makes this a recurring scheduled query rather than a one-off run; using the bq command you can also use other flags, described here.
Second, using the Python API, you can configure your scheduled query using the DataTransferServiceClient, which allows you to pass the whole query configuration as a JSON dictionary, as in this example from the documentation, reproduced below:
from google.cloud import bigquery_datatransfer_v1
import google.protobuf.json_format

client = bigquery_datatransfer_v1.DataTransferServiceClient()

# TODO(developer): Set the project_id to the project that contains the
# destination dataset.
# project_id = "your-project-id"

# TODO(developer): Set the destination dataset. The authorized user must
# have owner permissions on the dataset.
# dataset_id = "your_dataset_id"

# TODO(developer): The first time you run this sample, set the
# authorization code to a value from the URL:
# https://www.gstatic.com/bigquerydatatransfer/oauthz/auth?client_id=433065040935-hav5fqnc9p9cht3rqneus9115ias2kn1.apps.googleusercontent.com&scope=https://www.googleapis.com/auth/bigquery%20https://www.googleapis.com/auth/drive&redirect_uri=urn:ietf:wg:oauth:2.0:oob
#
# authorization_code = "_4/ABCD-EFGHIJKLMNOP-QRSTUVWXYZ"
#
# You can use an empty string for authorization_code in subsequent runs of
# this code sample with the same credentials.
#
# authorization_code = ""

# Use standard SQL syntax for the query.
query_string = """
SELECT
  CURRENT_TIMESTAMP() as current_time,
  @run_time as intended_run_time,
  @run_date as intended_run_date,
  17 as some_integer
"""

parent = client.project_path(project_id)

transfer_config = google.protobuf.json_format.ParseDict(
    {
        "destination_dataset_id": dataset_id,
        "display_name": "Your Scheduled Query Name",
        "data_source_id": "scheduled_query",
        "params": {
            "query": query_string,
            "destination_table_name_template": "your_table_{run_date}",
            "write_disposition": "WRITE_TRUNCATE",
            "partitioning_field": "",
        },
        "schedule": "every 24 hours",
    },
    bigquery_datatransfer_v1.types.TransferConfig(),
)

response = client.create_transfer_config(
    parent, transfer_config, authorization_code=authorization_code
)

print("Created scheduled query '{}'".format(response.name))

How to reuse ssh connection during entire pytest suite

I have an e2e test suite which loads some fixtures into the database by calling a script on the server side over an SSH connection.
I want to keep the fixtures that I load local to the tests that need them. I would write a test something like:
class ExampleTests(BaseTest):
    def test_A(self):
        load_fixture('TEST_A')
        do_actual_test()

    def test_B(self):
        load_fixture('TEST_B')
        do_actual_test()
In my load_fixture method the SSH connection is made and the script is run on the server side.
If I run the entire test suite, it will create a new SSH connection each time I call the load_fixture method. Conceptually this is what I want: I don't want to load all my fixtures for all my tests before any test runs; I want to be able to run fixtures when I need them, e.g.
class ExampleTests(BaseTest):
    def test_B(self):
        user_a = load_user_fixture('username-A')
        do_some_testing_on_user_a()

        load_post_fixture_for_user(user_a, subject='subject-a')
        do_tests_using_post()
In this test it would also create two SSH connections.
So what I want to happen is that the first time I call the load_fixture method it creates the connection and keeps it around for the duration of the test suite; or I create a connection before any test runs and then use that connection whenever I load a fixture.
Of course it should keep working when I run the tests across multiple cores.
My load_fixture function looks something like:
def load_fixtures(connection_info, command, fixtures):
    out, err, exit_code = run_remote_fixture_script(connection_info, command, fixtures)

def run_remote_fixture_script(connection_info, command_name, *args):
    ssh = SSHClient()
    ssh.connect(...)
    command = '''
    ./load_fixture_script {test_target} {command} {args};
    '''.format(
        test_target=connection_info.target,
        command=command_name,
        args=''.join([" '{}'".format(arg) for arg in args])
    )
    stdin, stdout, stderr = ssh.exec_command(command)
    exit_code = stdout.channel.recv_exit_status()
    ssh.close()
    return stdout, stderr, exit_code
I also want to reopen the connection automatically if for any reason the connection closes.
You need to use
@pytest.fixture(scope="module")
Keeping the scope as module will keep the connection around for all the tests in that module (use scope="session" if you want a single connection for the whole test suite), together with a finalizer method within your fixture:
import pytest
from paramiko import SSHClient

@pytest.fixture(scope="module")
def run_remote_fixture_script(request, connection_info):
    # The connection is created once per module and reused by every call.
    # (connection_info is assumed to be provided by another fixture.)
    ssh = SSHClient()
    ssh.connect(...)

    def _run(command_name, *args):
        command = '''
        ./load_fixture_script {test_target} {command} {args};
        '''.format(
            test_target=connection_info.target,
            command=command_name,
            args=''.join([" '{}'".format(arg) for arg in args])
        )
        stdin, stdout, stderr = ssh.exec_command(command)
        exit_code = stdout.channel.recv_exit_status()
        return stdout, stderr, exit_code

    def fin():
        print("teardown ssh")
        ssh.close()

    request.addfinalizer(fin)
    return _run
Please excuse the formatting of the code. You could see this link for more details.
And you would call this fixture as:
def test_function(run_remote_fixture_script):
    stdout, stderr, exit_code = run_remote_fixture_script('load_fixture', 'TEST_A')
Hope this helps.
The finalizer method will be called when the fixture goes out of scope: at the end of the module for scope="module" (or of the whole suite for scope="session"); if the scope is "function", it is called after each test method.
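For reference, the same idea can be written more compactly as a session-scoped yield fixture (a minimal sketch assuming paramiko's SSHClient; the hostname and credentials are placeholders standing in for your connection_info). The code before the yield runs once at first use, and the code after it runs at teardown:

import pytest
from paramiko import SSHClient, AutoAddPolicy

@pytest.fixture(scope="session")
def ssh_connection():
    # One client shared by the whole test run.
    ssh = SSHClient()
    ssh.set_missing_host_key_policy(AutoAddPolicy())
    ssh.connect('test-server', username='tester', password='secret')
    yield ssh
    # Teardown: runs once, after the last test that used the fixture.
    ssh.close()

To satisfy the reconnection requirement from the question, load_fixture can check that ssh.get_transport() is not None and ssh.get_transport().is_active() before each use, and reconnect if the transport has dropped.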

how to invoke SQL inside perl script

I am trying to connect to a database and perform some SQL queries using this code, but every time it hangs.
my $connect_str = `/osp/local/etc/.oralgn $srv_name PSMF`;
my $sqlFile = "/osp/local/home/linus/amit/mytest.sql";
my ($abc, $cde)= split (/\#/ , $connect_str );
print "$abc";
$ORACLE_SID=SDDG00;
`export $ORACLE_SID`;
#chomp($abc);
#$abc=~ s/\s+$//;
`sqlplus $abc`;
open (SQL, "$sqlFile");
while (my $sqlStatement = <SQL>) {
    $sth = dbi->prepare($sqlStatement)
        or die (qq(Can't prepare $sqlStatement));
    $sth->execute()
        or die qq(Can't execute $sqlStatement);
}
How do I invoke a SQL command inside Perl?
Reading the documentation for the DBI module would be a good start.
Your problem seems to be this line.
$sth = dbi->prepare($sqlStatement)
You're trying to call the prepare method on the class "dbi". But you don't have a class called "dbi" in your program (or, at least, I can't see one in the code you've shown us).
To use a database from Perl you need to do these things:
1/ Load the DBI module (note, "DBI", not "dbi" - Perl is case sensitive).
use DBI;
2/ Connect to the database and get a database handle (Read the DBD::Oracle documentation for more details on the arguments to the connect() method).
my $dbh = DBI->connect('dbi:Oracle:dbname', $user, $password);
3/ You can then use this database handle to prepare SQL statements.
my $sth = $dbh->prepare($sqlStatement);

How do you stop a user-instance of Sql Server? (Sql Express user instance database files locked, even after stopping Sql Express service)

When using SQL Server Express 2005's User Instance feature with a connection string like this:
<add name="Default" connectionString="Data Source=.\SQLExpress;
AttachDbFilename=C:\My App\Data\MyApp.mdf;
Initial Catalog=MyApp;
User Instance=True;
MultipleActiveResultSets=true;
Trusted_Connection=Yes;" />
We find that we can't copy the database files MyApp.mdf and MyApp_Log.ldf (because they're locked) even after stopping the SqlExpress service, and have to resort to setting the SqlExpress service from automatic to manual startup mode, and then restarting the machine, before we can then copy the files.
It was my understanding that stopping the SqlExpress service should stop all the user instances as well, which should release the locks on those files. But this does not seem to be the case - could anyone shed some light on how to stop a user instance, such that its database files are no longer locked?
Update
OK, I stopped being lazy and fired up Process Explorer. Lock was held by sqlserver.exe - but there are two instances of sql server:
sqlserver.exe PID: 4680 User Name: DefaultAppPool
sqlserver.exe PID: 4644 User Name: NETWORK SERVICE
The file is open by the sqlserver.exe instance with the PID: 4680
Stopping the "SQL Server (SQLEXPRESS)" service, killed off the process with PID: 4644, but left PID: 4680 alone.
Seeing as the owner of the remaining process was DefaultAppPool, next thing I tried was stopping IIS (this database is being used from an ASP.Net application). Unfortunately this didn't kill the process off either.
Manually killing off the remaining sql server process does remove the open file handle on the database files, allowing them to be copied/moved.
Unfortunately I wish to copy/restore those files in some pre/post-install tasks of a WiX installer - as such I was hoping there might be a way to achieve this by stopping a Windows service, rather than having to shell out to kill all instances of sqlserver.exe, as that poses some problems:
Killing all the sqlserver.exe instances may have undesirable consequences for users with other Sql Server instances on their machines.
I can't restart those instances easily.
Introduces additional complexities into the installer.
Does anyone have any further thoughts on how to shutdown instances of sql server associated with a specific user instance?
Use "SQL Server Express Utility" (SSEUtil.exe) or the command to detach the database used by SSEUtil.
SQL Server Express Utility,
SSEUtil is a tool that lets you easily interact with SQL Server,
http://www.microsoft.com/downloads/details.aspx?FamilyID=fa87e828-173f-472e-a85c-27ed01cf6b02&DisplayLang=en
Also, the default timeout to stop the service after the last connection is closed is one hour. On your development box, you may want to change this to five minutes (the minimum allowed).
In addition, you may have an open connection through Visual Studio's Server Explorer Data Connections, so be sure to disconnect from any database there.
H:\Tools\SQL Server Express Utility>sseutil -l
1. master
2. tempdb
3. model
4. msdb
5. C:\DEV_\APP\VISUAL STUDIO 2008\PROJECTS\MISSICO.LIBRARY.1\CLIENTS\CORE.DATA.CLIENT\BIN\DEBUG\CORE.DATA.CLIENT.MDF

H:\Tools\SQL Server Express Utility>sseutil -d C:\DEV*
Failed to detach 'C:\DEV_\APP\VISUAL STUDIO 2008\PROJECTS\MISSICO.LIBRARY.1\CLIENTS\CORE.DATA.CLIENT\BIN\DEBUG\CORE.DATA.CLIENT.MDF'

H:\Tools\SQL Server Express Utility>sseutil -l
1. master
2. tempdb
3. model
4. msdb

H:\Tools\SQL Server Express Utility>
Using .NET Reflector, you can see that the following command is used to detach the database:
string.Format("USE master\nIF EXISTS (SELECT * FROM sysdatabases WHERE name = N'{0}')\nBEGIN\n\tALTER DATABASE [{1}] SET OFFLINE WITH ROLLBACK IMMEDIATE\n\tEXEC sp_detach_db [{1}]\nEND", dbName, str);
I have been using the following helper method to detach MDF files attached to SQL Server in unit tests (so that SQL Server releases the locks on the MDF and LDF files and the unit test can clean up after itself)...
private static void DetachDatabase(DbProviderFactory dbProviderFactory, string connectionString)
{
    using (var connection = dbProviderFactory.CreateConnection())
    {
        if (connection is SqlConnection)
        {
            SqlConnection.ClearAllPools();

            // convert the connection string (to connect to 'master' db), extract original database name
            var sb = dbProviderFactory.CreateConnectionStringBuilder();
            sb.ConnectionString = connectionString;
            sb.Remove("AttachDBFilename");
            var databaseName = sb["database"].ToString();
            sb["database"] = "master";
            connectionString = sb.ToString();

            // detach the original database now
            connection.ConnectionString = connectionString;
            connection.Open();
            using (var cmd = connection.CreateCommand())
            {
                cmd.CommandText = "sp_detach_db";
                cmd.CommandType = CommandType.StoredProcedure;

                var p = cmd.CreateParameter();
                p.ParameterName = "@dbname";
                p.DbType = DbType.String;
                p.Value = databaseName;
                cmd.Parameters.Add(p);

                p = cmd.CreateParameter();
                p.ParameterName = "@skipchecks";
                p.DbType = DbType.String;
                p.Value = "true";
                cmd.Parameters.Add(p);

                p = cmd.CreateParameter();
                p.ParameterName = "@keepfulltextindexfile";
                p.DbType = DbType.String;
                p.Value = "false";
                cmd.Parameters.Add(p);

                cmd.ExecuteNonQuery();
            }
        }
    }
}
Notes:
SqlConnection.ClearAllPools() was very helpful in eliminating "stealth" connections (when a connection is pooled, it stays active even though you Close() it; by explicitly clearing pooled connections you don't have to worry about setting the pooling flag to false in all connection strings).
The "magic ingredient" is the call to the system stored procedure sp_detach_db (Transact-SQL).
My connection strings included "AttachDBFilename" but didn't include "User Instance=True", so this solution might not apply to your scenario.
I just used this post to solve my WiX uninstall problem. I used this line from AMissico's answer:
string.Format("USE master\nIF EXISTS (SELECT * FROM sysdatabases WHERE name = N'{0}')\nBEGIN\n\tALTER DATABASE [{1}] SET OFFLINE WITH ROLLBACK IMMEDIATE\n\tEXEC sp_detach_db [{1}]\nEND", dbName, str);
It worked pretty well with WiX; I only had to change one thing to make it work for me.
I took out the sp_detach_db call and instead brought the database back online. If you don't, WiX will leave the MDF files around after the uninstall. Once I brought the DB back online, WiX would properly delete the MDF files.
Here is my modified line:
string.Format( "USE master\nIF EXISTS (SELECT * FROM sysdatabases WHERE name = N'{0}')\nBEGIN\n\tALTER DATABASE [{0}] SET OFFLINE WITH ROLLBACK IMMEDIATE\n\tALTER DATABASE [{0}] SET ONLINE\nEND", dbName );
This may not be what you are looking for, but the free tool Unlocker has a command-line interface that could be run from WiX. (I have used Unlocker for a while and have found it stable and very good at what it does best: unlocking files.)
Unlocker can unlock and move/delete most any file.
The downside is that apps that need a lock on the file will no longer have it (but sometimes they still work just fine). Note that this does not kill the process that holds the lock; it just removes its lock. (It may be that restarting the SQL services you stopped will be enough for them to re-lock and/or work correctly.)
You can get Unlocker from here: http://www.emptyloop.com/unlocker/
To see the command line options run unlocker -H
Here they are for convenience:
Unlocker 1.8.8
Command line usage:
Unlocker.exe Object [Option]
Object:
Complete path including drive to a file or folder
Options:
/H or -H or /? or -?: Display command line usage
/S or -S: Unlock object without showing the GUI
/L or -L: Object is a text file containing the list of files to unlock
/LU or -LU: Similar to /L with a unicode list of files to unlock
/O or -O: Outputs Unlocker-Log.txt log file in Unlocker directory
/D or -D: Delete file
/R Object2 or -R Object2: Rename file, if /L or /LU is set object2 points to a text file containing the new name of files
/M Object2 or -M Object2: Move file, if /L or /LU is set object2 points a text file containing the new location of files
Assuming your goal was to replace C:\My App\Data\MyApp.mdf with a file from your installer, you would want something like unlocker "C:\My App\Data\MyApp.mdf" -S -D. This would delete the file so you could copy in a new one.

Execute SQL from file in SQLAlchemy

How can I execute a whole SQL file against a database using SQLAlchemy? There can be many different SQL queries in the file, including BEGIN and COMMIT/ROLLBACK.
sqlalchemy.text or sqlalchemy.sql.text
The text construct provides a straightforward method to directly execute .sql files.
from sqlalchemy import create_engine
from sqlalchemy import text
# or from sqlalchemy.sql import text

engine = create_engine('mysql://{USR}:{PWD}@localhost:3306/db', echo=True)

with engine.connect() as con:
    with open("src/models/query.sql") as file:
        query = text(file.read())
        con.execute(query)
SQLAlchemy: Using Textual SQL
text()
I was able to run .sql schema files using pure SQLAlchemy and some string manipulations. It surely isn't an elegant approach, but it works.
# Open the .sql file
sql_file = open('file.sql', 'r')

# Create an empty command string
sql_command = ''

# Iterate over all lines in the sql file
for line in sql_file:
    # Ignore commented lines
    if not line.startswith('--') and line.strip('\n'):
        # Append line to the command string, separated by a space
        sql_command += ' ' + line.strip('\n')

        # If the command string ends with ';', it is a full statement
        if sql_command.endswith(';'):
            # Try to execute statement and commit it
            try:
                session.execute(text(sql_command))
                session.commit()

            # Print a message in case of error
            except:
                print('Ops')

            # Finally, clear command string
            finally:
                sql_command = ''
It iterates over all lines in a .sql file ignoring commented lines.
Then it concatenates lines that form a full statement and tries to execute the statement. You just need a file handler and a session object.
You can do it with SQLAlchemy and psycopg2:
import sqlalchemy

file = open(path)
engine = sqlalchemy.create_engine(db_url)
escaped_sql = sqlalchemy.text(file.read())
engine.execute(escaped_sql)
Unfortunately, I'm not aware of a good general answer for this. Some DBAPIs (psycopg2, for instance) support executing many statements at a time. If the files aren't huge you can just load them into a string and execute them on a connection. For others, I would try to use a command-line client for that DB and pipe the data into it using the subprocess module (see the sketch below).
If those approaches aren't acceptable, then you'll have to go ahead and implement a small SQL parser that can split the file apart into separate statements. This is really tricky to get 100% correct, as you'll have to factor in database-dialect-specific literal escaping rules, the charset used, and any database configuration options that affect literal parsing (e.g. PostgreSQL's standard_conforming_strings).
If you only need to get this 99.9% correct, then some regexp magic should get you most of the way there.
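As an illustration of the subprocess route, for PostgreSQL the file can be piped into the psql command-line client (a rough sketch: the connection URL and file name are placeholders, and psql is assumed to be installed and on PATH):

import subprocess

# Feed the whole script to psql on stdin; psql handles statement
# splitting, transactions, and dialect-specific quoting itself.
with open('script.sql') as f:
    subprocess.run(['psql', 'postgresql://user:password@localhost/mydb'],
                   stdin=f, check=True)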
If you are using sqlite3, it has a useful extension to the DBAPI called conn.executescript(str). I've hooked this up via something like the following and it seemed to work (not all context is shown, but it should be enough to get the drift):
def init_from_script(script):
    Base.metadata.drop_all(db_engine)
    Base.metadata.create_all(db_engine)

    # HACK ALERT: we can do this using sqlite3 low level api, then reopen session.
    f = open(script)
    script_str = f.read().strip()
    global db_session
    db_session.close()

    import sqlite3
    conn = sqlite3.connect(db_file_name)
    conn.executescript(script_str)
    conn.commit()

    db_session = Session()
Is this pure evil, I wonder? I looked in vain for a 'pure' SQLAlchemy equivalent; perhaps that could be added to the library, something like db_session.execute_script(file_name)? I'm hoping that db_session will work just fine after all that (i.e. no need to restart the engine), but I'm not sure yet... further research is needed (i.e. do we need to get a new engine, or just a new session, after going behind SQLAlchemy's back?).
FYI sqlite3 includes a related routine: sqlite3.complete_statement(sql) if you roll your own parser...
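For example, a line-accumulating splitter built on that routine might look like this (a sketch, not a full parser; it assumes statements are terminated by semicolons, as complete_statement() requires):

import sqlite3

def iter_statements(path):
    # Accumulate lines until sqlite3 judges the buffer a complete
    # statement, then yield it and start over.
    buffer = ''
    with open(path) as f:
        for line in f:
            buffer += line
            if sqlite3.complete_statement(buffer):
                yield buffer.strip()
                buffer = ''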
You can access the raw DBAPI connection through this:
raw_connection = mySqlAlchemyEngine.raw_connection()  # get a hold of the proxied DBAPI connection instance
raw_cursor = raw_connection.cursor()
but then it will depend on which dialect/driver you are using which can be referred to through this list.
For psycopg2, you can just do:
raw_cursor.execute(open("my_script.sql").read())
but for pysqlite you would need to do:
raw_cursor.executescript(open("my_script.sql").read())
and in line with that you would need to check the documentation of whichever DBAPI driver you are using to see if multiple statements are allowed in one execute or if you would need to use a helper like executescript which is unique to pysqlite.
Here's how to run the script by splitting it into statements and running each statement directly with a "connectionless" execution on the SQLAlchemy Engine. This assumes that each statement ends with a ; and that there's no more than one statement per line.
import re
from sqlalchemy import create_engine, text

engine = create_engine(url)

with open('script.sql') as file:
    statements = re.split(r';\s*$', file.read(), flags=re.MULTILINE)

for statement in statements:
    if statement:
        engine.execute(text(statement))
In the current answers, I did not find a solution which works when a combination of these features is present in the .sql file:
Comments with "--"
Multi-line statements with additional comments after "--"
Function definitions which contain multiple SQL queries ending with ";" but must be executed as a whole statement
I found a rather simple solution:
# check for /* */
with open(file, 'r') as f:
    assert '/*' not in f.read(), 'comments with /* */ not supported in SQL file python interface'

# we check out the SQL file line-by-line into a list of strings (without \n, ...)
with open(file, 'r') as f:
    queries = [line.strip() for line in f.readlines()]

# from each line, remove all text which is behind a '--'
def cut_comment(query: str) -> str:
    idx = query.find('--')
    if idx >= 0:
        query = query[:idx]
    return query

# join all in a single line code with blank spaces
queries = [cut_comment(q) for q in queries]
sql_command = ' '.join(queries)

# execute in connection (e.g. sqlalchemy)
conn.execute(sql_command)
The code below works for me in Alembic migrations:
from alembic import op
import sqlalchemy as sa
from ekrec.common import get_project_root

def upgrade():
    path = f'{get_project_root()}/migrations/versions/fdb8492f75b2_.sql'
    op.execute(open(path).read())
I had success with David's answer above, with a slight modification: use get_bind(), as I was working with a Session rather than an Engine, and then call cursor() on the raw connection:
raw_connection = myDbSession.get_bind().raw_connection()
raw_cursor = raw_connection.cursor()
raw_cursor.execute(open("my_script.sql").read())