In a Karate script, is there a way to cache DB connections? To be more specific, the DB connections are made through a Java program; every time we make a DB call, the connection call is also made.
* def dbDemo = Java.type('tests.DataBaseAssertions')
The above line of code is used in all the feature files. Is there a way to cache this object at the application level, so that all scripts can refer to it?
Sounds like you are looking for the callSingle() syntax; please refer to the docs:
https://github.com/intuit/karate#hooks
var result = karate.callSingle('classpath:jdbc.feature');
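For example, here is a minimal karate-config.js sketch; the feature name jdbc.feature and the variable name dbDemo are assumptions for illustration. callSingle() runs the feature at most once for the entire test run, even across parallel threads, and caches the result:
function fn() {
  var config = {};
  // executed at most once per test run; the result is cached
  var result = karate.callSingle('classpath:jdbc.feature');
  // assumes jdbc.feature defines a variable named dbDemo
  config.dbDemo = result.dbDemo;
  return config;
}
Every feature file can then use dbDemo directly, without re-instantiating the Java class.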
I'm currently exploring implementing hooks in some of my DAGs. For instance, in one DAG, I'm trying to connect to S3 to send a CSV file to a bucket, which then gets copied to a Redshift table.
I have written a custom module which I import to run this process. I am currently trying to set up an S3Hook to handle this process instead, but I'm a little confused about setting up the connection and how everything works.
First, I import the hook:
from airflow.hooks.S3_hook import S3Hook
Then I try to create the hook instance:
s3_hook = S3Hook(aws_conn_id='aws-s3')
Next, I try to set up the client:
s3_client = s3_hook.get_conn()
However, when I run the client line above, I receive this error:
OperationalError: (sqlite3.OperationalError)
no such table: connection
[SQL: SELECT connection.password AS connection_password, connection.extra AS connection_extra, connection.id AS connection_id, connection.conn_id AS connection_conn_id, connection.conn_type AS connection_conn_type, connection.description AS connection_description, connection.host AS connection_host, connection.schema AS connection_schema, connection.login AS connection_login, connection.port AS connection_port, connection.is_encrypted AS connection_is_encrypted, connection.is_extra_encrypted AS connection_is_extra_encrypted
FROM connection
WHERE connection.conn_id = ?
LIMIT ? OFFSET ?]
[parameters: ('aws-s3', 1, 0)]
(Background on this error at: http://sqlalche.me/e/13/e3q8)
I'm trying to diagnose the error, but the traceback is long. I'm a little confused about why sqlite3 is involved when I'm trying to use S3. Can anyone unpack this? Why is this error thrown when trying to set up the client?
Thanks
Airflow is not just a library - it's also an application.
To execute Airflow code you must have an Airflow instance running, which also means having a database with the needed schema.
To create the tables you must run airflow db init (called airflow initdb in Airflow 1.x).
Edit:
After the discussion in the comments: your issue is that you have a working Airflow application inside Docker, but your DAGs are written on your local disk. Docker is a closed environment; if you want Airflow to recognize your DAGs, you must make the files visible in the DAG folder inside the container, for example as sketched below.
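As one common fix, here is a minimal sketch assuming a docker-compose setup; the service name and host path are placeholders, and /opt/airflow/dags is the default DAG folder in the official Airflow image:
services:
  airflow-scheduler:
    volumes:
      # make the local dags folder visible inside the container
      - ./dags:/opt/airflow/dags
Alternatively, you can copy the files into the running container with docker cp.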
I have developed a script which executes against one DB instance, e.g. db1. The code to connect to the DB is written in the Background section. Now I have to execute the same test script against a different DB instance, e.g. db2.
Feature: Execution against multiple DB instances
##############################################
Background:
* def db_properties = {db_username,db_password,db_connection_string,driver}
* def createConnection = path to read .java file
* def readFromDB = new createConnection(db_properties)
##############################################
In * def db_properties, I have hard-coded the actual values of the username, password, connection string and driver. What I want to do is validate my API response against another DB instance, e.g. when the build is deployed in another environment the DB properties are different. How can I do it?
This has nothing to do with Karate. Maybe the solution is to have 2 sets of DB connection values in your karate-config.js. Please figure out a solution that is appropriate for your situation.
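As a sketch of that idea (all names and values below are placeholders): karate-config.js can branch on karate.env, so running with -Dkarate.env=staging picks up the second set of connection properties:
function fn() {
  var env = karate.env || 'dev'; // set on the command line via -Dkarate.env
  var config = {
    db_properties: {
      username: 'dev_user',
      password: 'dev_pass',
      connection_string: 'jdbc:dev-host/db1',
      driver: 'some.jdbc.Driver'
    }
  };
  if (env === 'staging') {
    config.db_properties.username = 'staging_user';
    config.db_properties.password = 'staging_pass';
    config.db_properties.connection_string = 'jdbc:staging-host/db2';
  }
  return config;
}
The Background can then use db_properties as-is instead of the hard-coded values.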
In Amazon Neptune I would like to run multiple Gremlin commands in Java as a single transaction. The documentation says that tx.commit() and tx.rollback() are not supported. It suggests this: multiple statements separated by a semicolon (;) or a newline character (\n) are included in a single transaction.
The example from the documentation shows that Gremlin is supported in Java, but I don't understand how to submit "multiple statements separated by a semicolon":
GraphTraversalSource g = traversal().withRemote(DriverRemoteConnection.using(cluster));
// Add a vertex.
// Note that a Gremlin terminal step, e.g. next(), is required to make a request to the remote server.
// The full list of Gremlin terminal steps is at https://tinkerpop.apache.org/docs/current/reference/#terminal-steps
g.addV("Person").property("Name", "Justin").next();
// Add a vertex with a user-supplied ID.
g.addV("Custom Label").property(T.id, "CustomId1").property("name", "Custom id vertex 1").next();
g.addV("Custom Label").property(T.id, "CustomId2").property("name", "Custom id vertex 2").next();
g.addE("Edge Label").from(g.V("CustomId1")).to(g.V("CustomId2")).next();
The doc you are referring to covers the "string" mode of query submission. In your approach you are using the "bytecode" mode, via the remote instance of the graph traversal source (the "g" object). Instead, you should submit a string script via the client object:
Client client = gremlinCluster.connect();
client.submit("g.V()...iterate(); g.V()...iterate(); g.V()...");
Gremlin sessions
Java Example
After getting the cluster object,
String sessionId = UUID.randomUUID().toString();
Client client = cluster.connect(sessionId);
client.submit(query1);
client.submit(query2);
// ... more queries ...
client.submit(query3);
client.close();
When you call .close(), all the mutations are committed.
You can also capture the response from the query submission:
List<Result> results = client.submit(query);
results.stream()...
You can also use the SessionedClient, which will run all queries in the same transaction upon close().
More information is here: https://docs.aws.amazon.com/neptune/latest/userguide/access-graph-gremlin-sessions.html#access-graph-gremlin-sessions-glv
tl;dr
After manually updating the JDBC connection properties of a single SoapUI test step,
how can I copy them to the other test steps in the project (without resorting to ${property} expansion)?
I suppose Groovy is the key?
Background
I have a SoapUI project containing many JDBC test steps pointing to my development database. (The open-source version of the JDBC test step has fields for setting the connection properties and the SQL query manually; see Getting Started | JDBC on SoapUI.org.)
Constraint: I am currently working without the Connections feature from SmartBear's Pro version.
Goal
Before deploying, I want to run the same tests in our staging environment, i.e. I have to change the JDBC connection settings throughout the test suite(s).
Preliminary considerations:
In order to redirect all JDBC steps to the staging database, I could edit my tests so that the connection string and driver fields rely on property expansion, as described in "SOAPUI ability to switch between database connections for test suite".
Specific approach:
However, in this case I need to see the connection strings and drivers directly on the test steps (in contrast to seeing just the ${expansion} variables). Rationale: it gives more useful screenshots with the real values.
The connection properties can be copied from one test step to other JDBC test steps in the project using the following Groovy script:
// Select "correctly configured" JDBC TestStep/Case/Suite to be used as reference
def s = testRunner.testCase
.testSuite
.project
.testSuites["Reference TestSuite"]
.testCases["Reference TestCase"].getTestStepAt(1)
log.info "${s.getConnectionString()}, ${s.getDriver()}"
// Use s to configure all JDBC TestSteps in current TestSuite
testRunner.testCase
.testSuite
.testCases
.each{iTC, testCase ->
//log.debug "${iTC}: ${testCase}"
testCase.getTestStepsOfType(com.eviware.soapui.impl.wsdl.teststeps.JdbcRequestTestStep)
.each{testStep ->
testStep.setConnectionString(s.getConnectionString())
testStep.setDriver(s.getDriver())
log.info "${testStep.getConnectionString()}, ${testStep.getDriver()}"
}
}
To run this, I have introduced an additional test suite, internal TS, containing a test case, internal TC. I have added a Groovy test step copyJdbcSettings with the above script to internal TC and executed it once.
I have then disabled internal TS until I need it again someday.
I am using a prepared statement like this:
PreparedStatement pstmt = getConnection().prepareStatement(INSERT_QUERY);
pstmt.setInt(1,userDetails.getUsersId());
log.debug("SQL for inserting child transactions " + pstmt.toString());
I want to log the exact SQL statement, after binding, to the log file, but this is not working: it logs something like SQLServerPreparedStatement:7. I searched on the internet but did not find a satisfactory answer. Any help will be appreciated.
You can try to enable the JDBC internal logging by setting a PrintWriter on the DriverManager. Log4j2 provides an IO Streams module so you can include the output in your normal log file. Sample code:
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.io.IoBuilder;
// route the JDBC DriverManager's internal logging into Log4j2
PrintWriter logger = IoBuilder.forLogger(DriverManager.class)
        .setLevel(Level.DEBUG)
        .buildPrintWriter();
DriverManager.setLogWriter(logger);
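Note that IoBuilder comes from the separate log4j-iostreams artifact, which must be on the classpath in addition to log4j-api and log4j-core.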
In addition, individual JDBC drivers often provide proprietary logging mechanisms, usually enabled with a system property. You will need to consult the documentation for your specific driver for the details.
You can also write a utility function which replaces each '?' placeholder with the corresponding bind value, as sketched below.
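A minimal sketch of that idea, for logging purposes only. It assumes you record the bind values yourself as you set them, since PreparedStatement offers no portable way to read them back; the helper name and signature are illustrative:
import java.util.List;

public final class SqlLog {

    private SqlLog() {}

    // Naively substitutes each '?' with the matching bound value.
    // It does not handle '?' inside string literals or comments and
    // performs no SQL escaping, so never execute the resulting string.
    public static String withBindValues(String sql, List<Object> binds) {
        StringBuilder sb = new StringBuilder();
        int next = 0;
        for (char c : sql.toCharArray()) {
            if (c == '?' && next < binds.size()) {
                Object v = binds.get(next++);
                sb.append(v instanceof Number ? v.toString() : "'" + v + "'");
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }
}
You would then collect each value as you bind it, e.g. call binds.add(userDetails.getUsersId()) next to pstmt.setInt(1, userDetails.getUsersId()), and log SqlLog.withBindValues(INSERT_QUERY, binds) instead of pstmt.toString().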