Default SQL Server Session for IntelliJ

When opening a .bdy (/sql/vew/..) file in IntelliJ, it always greets me with semantic errors on almost every line. That is because it needs a DB session to check the references against. DataGrip behaves identically.
Can I somehow set a default session per file/dir/project, or globally?

It's in File | Settings | Languages & Frameworks | SQL Resolution Scopes. There you can specify a global project mapping to a data source/database/schema, or define a mapping for any directory or file.
In DataGrip the settings path is File | Settings | Database | SQL Resolution Scopes.

How to load / mount an existing file-based database in HSQLDB

Good day all!
I have HSQLDB 2.5 running as a server on my Windows machine via:
#java -classpath ./lib/hsqldb.jar org.hsqldb.server.Server -database.0 file:.\data -dbname.0 foo
I left the files of my foo database in the data folder:
foo.data
foo.log (empty BTW)
foo.properties
foo.script
... and I see this line:
[Server#74cd4d]: Database [index=0. id=0, db=file:.\data, alias=foo] opened successfully in 335ms.
I open the manager via:
#java -classpath .\lib\hsqldb.jar org.hsqldb.util.DatabaseManagerSwing
connection properties:
Recent Settings: foo
Setting Name: foo
Type: HSQL Database Engine Server
Driver: org.hsqldb.jdbc.JDBCDriver
URL: jdbc:hsqldb:hsql://localhost/foo
I run the query to show all tables (which I know for a fact exists):
SELECT * FROM INFORMATION_SCHEMA.SYSTEM_TABLES where TABLE_TYPE='TABLE'
... but I get zero results.
Any ideas?
Thanks!
The file: URL among the server properties indicates the path and file name of the database files (but without the file extensions such as .script, .data).
The URL you used, db=file:.\data, refers to database files named data.script, data.properties, etc. in the parent directory of lib. If you check that directory, you will probably find that a set of database files with those names was created when you started the server.
In order to access your proper database, the URL should be db=file:.\data\foo
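In other words, assuming the foo.* files really do live in the data folder next to lib, the server start command from the question would become the following (same command, only the database path changed; not verified against your setup):
#java -classpath ./lib/hsqldb.jar org.hsqldb.server.Server -database.0 file:.\data\foo -dbname.0 foo
The manager connection URL jdbc:hsqldb:hsql://localhost/foo can then stay as it is.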

Jenkins Allure report is not showing all the results when we have multiple scenarios

I have the following scenario outline and I am generating an Allure report, but the report does not contain data for all the scenarios; it only shows the data from the last run.
It only shows the result for the | uat1_nam | Password01 | test data.
The Jenkins plugin version I am using is 2.13.6.
Scenario Outline: Find a transaction based on different criteria and view the details
  Given I am login to application with user "<user id>" and password "<password>"
  When I navigate to Balances & Statements -> Find a transaction
  Then I assert I am on screen Balances & Statements -> Find a transaction -> Find a transaction
  #UAT1
  Examples:
    | user id  | password   |
    | uat1_moz | Password01 |
    | uat1_nam | Password01 |
I have a similar issue.
We are running tests with the same software on Linux and Windows, and generating results into 2 separate folders.
Then we have:
allure-reports
|_ linux-report
|_ windows-report
Then we are using the following command in the Jenkinsfile:
allure([
    includeProperties: false,
    jdk: '',
    properties: [],
    reportBuildPolicy: 'ALWAYS',
    results: [[path: 'allure-reports/linux-report'], [path: 'allure-reports/windows-report']]
])
Similar to Sarath, only the results from the last run are available...
I also tried to run the cli directly on my machine, same results.
allure serve allure-reports/linux-report allure-reports/windows-report
I have already found several approaches; this one is very similar to my use case, but I do not understand why it works there and not for me:
https://github.com/allure-framework/allure2/issues/1051
I also tried the following method, but the Docker container does not run properly on Linux due to permission issues, even though I am running the container from a folder where I have all permissions. The result is the same if I pass my user ID as a parameter:
https://github.com/fescobar/allure-docker-service#MULTIPLE-PROJECTS---REMOTE-REPORTS
I was able to dig deeper into the topic, and I can finally show why the data is overwritten.
I used a very simple example to generate 2 different reports, where only allure.epic was different.
As I suspected, if we generate 2 separate result sets from the same source folder, then only the latest one is taken into account (the allure.epic name was updated in between).
If I have 2 different folders with the same code (where only the allure.epic is different), then all the data is available and stored in different Suites!
So, to make Allure consider the reports as different and build a separate classification for each OS, we have to run the tests from code stored in different locations. That does not fit my use case, as the same code is tested on both Linux and Windows.
Or maybe there is an option in pytest-allure to specify the root classification?
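One hedged workaround, assuming allure-pytest is in use (the fixture name and the choice of epic as the distinguishing label are mine, mirroring the allure.epic experiment above): label every test at run time with the OS it ran on, so the two result folders classify differently even though they come from the same code location.
import platform

import allure
import pytest

@pytest.fixture(autouse=True)
def allure_os_label():
    # Tag each test with the operating system it ran on, so the
    # Linux and Windows result sets end up in separate classifications.
    allure.dynamic.epic(platform.system())
    yield
Placed in conftest.py, this applies to every test without touching the test code itself.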

pytest, any way to include a test file or list of test files?

I am looking for best-practice advice regarding the following context:
I am using pytest to run integration tests on my IaC deployment.
My IaC code base is structured as:
myapp
|
|_roles
| |_role1
| |_role2
|_resources
|_tomcat
|_java
I'd like to use the same kind of structure for my test files.
Tests are currently divided into files matching the roles (role1, role2):
tests
|
|_roles
|_test_role1.py
|_test_role2.py
which leads to duplicated code, e.g.:
role1 is a Tomcat-based app,
role2 holds pure Java code,
so in both test files (test_role1.py and test_role2.py) there will be a Java test function.
If I could add a dir structure as:
tests
|
|_roles
| |_test_role1.py
| |_test_role2.py
|
|_resources
|_test_tomcat.py
|_test_java.py
Then I could just "include / import" the test_java.py functions to use them in test_role1.py and test_role2.py without duplicating code...
What's the best way to achieve this?
I am already using fixtures (defined in conftest.py), and I feel that the solution to my duplicated code is something along the lines of fixtures or test modules, but my limited Python / pytest knowledge is keeping me from the actual solution.
Thanks
If you don't mind running your tests as a module, you could turn your test directories into packages by placing a file called '__init__.py' in the root of the test tree, in the directory with the code to be tested and in the directory with the testing code.
You can then perform relative imports to access the functions you need,
e.g. to access test_java.py from test_role2.py:
from ..resources import test_java
A single dot represents the current package; two dots represent the parent package.
You will need to use the -m flag when calling your code so Python understands you are running a module with relative imports.
In your case you might consider performing the messy relative imports in conftest.py.
This post explains the above in more detail:
http://blog.habnab.it/blog/2013/07/21/python-packages-and-you/
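As a rough sketch of that layout, using the file names from the question (the shared helper check_java_installed is purely illustrative, and this assumes the suite is started as a package, e.g. python -m pytest from the project root):
tests
|_ __init__.py
|_ conftest.py
|_ roles
|  |_ __init__.py
|  |_ test_role1.py
|  |_ test_role2.py
|_ resources
   |_ __init__.py
   |_ test_java.py
   |_ test_tomcat.py

# tests/resources/test_java.py
import shutil

def check_java_installed():
    # Shared check reused by every role that needs a JVM.
    assert shutil.which("java") is not None

# tests/roles/test_role1.py
from ..resources.test_java import check_java_installed

def test_role1_has_java():
    # role1 is Tomcat-based, so it needs a working Java install.
    check_java_installed()
The same import works from test_role2.py, so the Java check lives in exactly one place.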

How to use a config file to connect to a database set by the user

I have a program that will run a query and return results in a report viewer. The issue is we have 10 locations, all with their own local database. What I'd like to do is have each location use the program and utilize the App.config file to specify which database to connect to, depending on the location. This will prevent me from having to create 10 individual programs with separate database connections. I was thinking I could have 3 values in the app.config file: "Database", "Login", "Password". Generally speaking the databases are on the .30 address, so it would be nice to be able to have them set the config file to the database server IP.
For example:
Location: 1
DatabaseIP: 10.0.1.30
Login: sa
Password: databasepassword
Is it possible to set something like this up using the app.config file?
You should take a look at resource files.
They are originally intended for localization, but they should work for you as well.
Go to your project Properties and set up an Application Setting with Type (Connection String) from the drop-down. This will result in an XML config file in your output directory in which you can modify the connection string post-compile.
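For illustration, a connection string stored that way ends up in a section of the generated .config file roughly like the one below (the setting name and the values are placeholders built from the question, not the exact names the designer will generate):
<configuration>
  <connectionStrings>
    <add name="MyApp.Properties.Settings.LocationDb"
         connectionString="Data Source=10.0.1.30;Initial Catalog=Reports;User ID=sa;Password=databasepassword"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>
Each location can then edit just that connectionString value after deployment.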
I ended up using a simple XML file to do this. I used this site to accomplish it. I first wrote the XML on form load, then switched it to a read.

Boost.Build - sources with the same name

src
|--Manager.cpp
|--Specializations
| |--Manager.cpp
Building this, Boost.Build tries to create
/bin/...
|--Manager.o
|--Manager.o
but fails. How can I resolve this automatically? I read the FAQ item, but I don't like that solution, as I have to fix things manually whenever I have the same class name in a different namespace. Would it be possible to make Boost.Build automatically prefix object file names with the directory?
/bin/...
|--Manager.o
|--Specializations.Manager.o
Or duplicate the source directory tree?
/bin/...
|--Manager.o
|--Specializations
| |--Manager.o
This behavior was changed a long time ago and it should just work: Boost.Build now mimics the source structure, i.e. you should get both bin/Manager.o and bin/Specializations/Manager.o.
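For reference, a minimal Jamfile along these lines (the target name is illustrative, and the exact output directory also depends on toolset and variant):
exe manager : Manager.cpp Specializations/Manager.cpp ;
Both object files are produced, each under a path that mirrors its source directory.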