How do I connect to Azure SQL Database with Ballerina?

I am using the ballerinax/mssql Ballerina package to connect to my Azure SQL database.
Error:
error: Error in SQL connector configuration: Failed to initialize pool: Connection reset ClientConnectionId:5d5b4f8b-7e6e-43d0-bce3-5fe79e403088 Caused by :Connection reset ClientConnectionId:5d5b4f8b-7e6e-43d0-bce3-5fe79e403088 Caused by :Connection reset
    at ballerinax.mssql.1:createClient(Client.bal:173)
       ballerinax.mssql.1:init(Client.bal:49)
       viggnah.test_apis.0.$anonType$_1:$get$test(testAPI.bal:45)
Code:
import ballerina/sql;
import ballerina/http;
import ballerinax/mssql.driver as _;
import ballerinax/mssql;

configurable int servicePort = 9090;
configurable string user = "admin@my-server";
configurable string host = "my-server.database.windows.net";
configurable string database = "my-db";
configurable int dbPort = 1443;
configurable string password = "***";

service / on new http:Listener(servicePort) {
    resource function get test(string id) returns error? {
        mssql:Client|sql:Error dbClient = new (host = host, user = user, password = password, database = database, port = dbPort);
        if dbClient is error {
            return dbClient;
        }
    }
}

I created an Azure SQL database with the following firewall rule: I added the client IP to the firewall (a CLI sketch of this step is shown right below). I then ran the Ballerina code that follows to connect to the Azure SQL database.
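For reference, the same firewall rule can be created from the Azure CLI. This is a minimal sketch; the resource group, server name, rule name, and IP address are placeholders for your own values:

az sql server firewall-rule create \
    --resource-group my-resource-group \
    --server my-server \
    --name AllowClientIP \
    --start-ip-address 203.0.113.5 \
    --end-ip-address 203.0.113.5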
import ballerina/sql;
import ballerinax/mssql.driver as _;
import ballerinax/mssql;

string clientStorePath = "/path/to/keystore.p12";
string trustStorePath = "/path/to/truststore.p12";

mssql:Options mssqlOptions = {
    secureSocket: {
        encrypt: true,
        trustServerCertificate: false,
        key: {
            path: clientStorePath,
            password: "password"
        },
        cert: {
            path: trustStorePath,
            password: "password"
        }
    }
};

mssql:Client|sql:Error dbClient = new (host = "<serverName>.database.windows.net", user = "<username>", password = "<password>", database = "<dbName>", port = 1433, options = mssqlOptions);
It connected successfully without any error.
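To confirm the client really works, a quick smoke-test query can be run against it. A minimal sketch, assuming it runs inside a main function, that ballerina/io is imported, and that SELECT 1 is only an illustrative query:

if dbClient is mssql:Client {
    // Illustrative smoke test: fetch a scalar through the new client.
    int|sql:Error result = dbClient->queryRow(`SELECT 1`);
    if result is int {
        io:println("Connection verified, got: ", result);
    }
}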
I also tried another way, using the code below:
import ballerina/sql;
import ballerinax/mssql.driver as _;
import ballerinax/mssql;
mssql:Client|sql:Error dbClient = new(host="<serverName>.database.windows.net", user="<username>", password="<password>", database="<dbName>", port=1433);
Using the above as well, my Azure SQL database connected successfully.
It worked for me. Please check from your end.

Related

mssql in nodejs not connecting

I am trying to connect to my database to create a local website, but I am having connection issues.
Running in Node.js, this is my server.js file:
let sql = require('mssql');

let config = {
    user: 'sa',
    password: 'password',
    server: '10.0.1.130\\SQLSERVER',
    database: 'MY_DB',
};

function connect() {
    let dbConn = new sql.ConnectionPool(config);
    dbConn.connect();
}

connect();
Upon running this I get the following error:
ConnectionError: Failed to connect to 10.0.1.130:undefined - self signed certificate
code: 'ESOCKET'
Not sure why it is removing \SQLSERVER, but that seems to be my issue.
I know the credentials are all correct, as I connect to this database from another computer (Ubuntu), but the instance name is being stripped. We want to move it to Windows and have not been able to connect up to this point.
Any suggestions are appreciated.
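For what it's worth, two config knobs in the node mssql package are commonly involved with both symptoms here. This is a hedged sketch, not a confirmed fix: the named instance can go in options.instanceName rather than being embedded in server, and a self-signed certificate can be accepted with options.trustServerCertificate:

let sql = require('mssql');

let config = {
    user: 'sa',
    password: 'password',
    server: '10.0.1.130',               // host only, no instance suffix
    database: 'MY_DB',
    options: {
        instanceName: 'SQLSERVER',      // named instance, resolved via SQL Browser
        encrypt: true,
        trustServerCertificate: true,   // accept the self-signed certificate
    },
};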

Terraform - Failed to set up SSH tunneling for host

Hello, I am trying to deploy RKE k8s with Terraform, but I am not able to connect to the desired host via SSH:
time="2022-02-28T11:17:38+01:00" level=warning msg="Failed to set up SSH tunneling for host [poc-k8s.my-domain.com]: Can't retrieve Docker Info: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.24/info\": Unable to access node with address [poc-k8s.my-domain.com:22] using SSH. Please check if you are able to SSH to the node using the specified SSH Private Key and if you have configured the correct SSH username. Error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain"
and this is the .tf file I am using:
terraform {
  required_providers {
    rke = {
      source  = "rancher/rke"
      version = "1.3.0"
    }
  }
}

provider "rke" {
  log_file = "rke_debug.log"
}

resource "rke_cluster" "cluster" {
  nodes {
    address = "poc-k8s.my-domain.com"
    user    = "root"
    role    = ["controlplane", "worker", "etcd"]
    ssh_key = file("~/.ssh/root_key")
  }
  nodes {
    address = "poc-k8s.my-domain.com"
    user    = "root"
    role    = ["worker", "etcd"]
    ssh_key = file("~/.ssh/root_key")
  }
  addons_include = [
    "https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml",
    "https://gist.githubusercontent.com/superseb/499f2caa2637c404af41cfb7e5f4a938/raw/930841ac00653fdff8beca61dab9a20bb8983782/k8s-dashboard-user.yml",
  ]
}

resource "local_file" "kube_cluster_yaml" {
  filename          = "~/.kube/kube_config_cluster.yml"
  sensitive_content = rke_cluster.cluster.kube_config_yaml
}
The key is of course correct, and I am able to connect to the desired host:
ssh -i ~/.ssh/root_key root@poc-k8s.my-domain.com
what am I missing here?
[Update]
The cluster resource has a delay_on_creation property that can be used:
resource "rke_cluster" "cluster" {
  delay_on_creation = 180
  (...)
}
I'm facing a similar issue. On the second run of terraform apply it works correctly. In my case the issue is that Docker is not up fast enough for the RKE provider.
I've found the following workaround from citynetwork/citycloud-examples:
resource "rke_cluster" "cluster" {
(...)
depends_on = [null_resource.wait-for-docker]
}
resource "null_resource" "wait-for-docker" {
provisioner "local-exec" {
command = "sleep 180"
}
depends_on = [
# list of servers docker being installed on
(...)
]
}
It waits for 180s which is not ideal, though.
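A possible refinement is to poll for Docker readiness instead of sleeping a fixed 180s. This is only a sketch, assuming the same SSH access the rke nodes already use:

resource "null_resource" "wait-for-docker" {
  provisioner "remote-exec" {
    # Poll until the Docker daemon answers, instead of a fixed sleep.
    inline = [
      "until docker info > /dev/null 2>&1; do sleep 5; done",
    ]
    connection {
      type        = "ssh"
      host        = "poc-k8s.my-domain.com"
      user        = "root"
      private_key = file("~/.ssh/root_key")
    }
  }
}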

Terraform cannot connect Chef provisioner with ssh

I cannot get Terraform's SSH to connect via a private AWS keypair for Chef provisioning; the error looks to just be a timeout:
aws_instance.app (chef): Connecting to remote host via SSH...
aws_instance.app (chef): Host: 96.175.120.236:32:
aws_instance.app (chef): User: ubuntu
aws_instance.app (chef): Password: false
aws_instance.app (chef): Private key: true
aws_instance.app (chef): SSH Agent: true
aws_instance.app: Still creating... (5m30s elapsed)
Error applying plan:
1 error(s) occurred:
* dial tcp 96.175.120.236:32: i/o timeout
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Here is my Terraform plan. Note the SSH settings: the key_name setting is set to my AWS keypair name, and ssh_for_chef.pem is the private key.
variable "AWS_ACCESS_KEY" {}
variable "AWS_SECRET_KEY" {}
provider "aws" {
region = "us-east-1"
access_key = "${var.AWS_ACCESS_KEY}"
secret_key = "${var.AWS_SECRET_KEY}"
}
resource "aws_instance" "app" {
ami = "ami-88aa1ce0"
count = "1"
instance_type = "t1.micro"
key_name = "ssh_for_chef"
security_groups = ["sg-c43490e1"]
subnet_id = "subnet-75dd96e2"
associate_public_ip_address = true
provisioner "chef" {
server_url = "https://api.chef.io/organizations/xxxxxxx"
validation_client_name = "xxxxxxx-validator"
validation_key = "/home/user01/Documents/Devel/chef-repo/.chef/xxxxxxxx-validator.pem"
node_name = "dubba_u_7"
run_list = [ "motd_rhel" ]
user_name = "user01"
user_key = "/home/user01/Documents/Devel/chef-repo/.chef/user01.pem"
ssl_verify_mode = "false"
}
connection {
type = "ssh"
user = "ubuntu"
private_key = "${file("/home/user01/Documents/Devel/ssh_for_chef.pem")}"
}
}
Any ideas?
I'm not sure if we had the same problem, since you didn't specify whether you were able to SSH to the instance.
In my case, I was running Terraform from within the VPC, and the connection was allowed by a security group, which can't be used with a public IP.
The solution is simple (but you will have to use the new conditional interpolations of Terraform v0.8.0):
Define this variable:
variable "use_public_ip" { default = true }
Then, inside the connection section of the chef provisioner, add the following line:
host = "${var.use_public_ip ? aws_instance.instance.public_ip : aws_instance.instance.private_ip}"
If you wish to use the public IP, set the variable to true; otherwise, set it to false.
I use this for AWS:
connection {
  user = "ubuntu"
  host = "${var.use_public_ip ? aws_instance.instance.public_ip : aws_instance.instance.private_ip}"
}

How to create Jenkins credentials via the API?

Does anybody know how to create new Jenkins (2.8) credentials (e.g. for Git access) via the API or a POST request in Jenkins? I have tried to use this code (from another Stack Overflow topic), but it does nothing:
import json
import requests

def main():
    data = {
        'credentials': {
            'scope': "GLOBAL",
            'username': "jenkins",
            'privateKeySource': {
                'privateKey': "-----BEGIN RSA PRIVATE KEY-----\nX\n-----END RSA PRIVATE KEY-----",
                'stapler-class': "com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey$DirectEntryPrivateKeySource"
            },
            'stapler-class': "com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey"
        }
    }
    payload = {
        'json': json.dumps(data),
        'Submit': "OK",
    }
    r = requests.post("http://%s:%d/credential-store/domain/_/createCredentials" % ("localhost", 8080), data=payload)
    if r.status_code != requests.codes.ok:
        print(r.text)

main()
I did it this way:
java -jar /tmp/jenkins-cli.jar -s http://localhost:8080/ \
groovy /tmp/credentials.groovy id username password
credentials.groovy
import jenkins.model.*
import com.cloudbees.plugins.credentials.*
import com.cloudbees.plugins.credentials.common.*
import com.cloudbees.plugins.credentials.domains.*
import com.cloudbees.plugins.credentials.impl.*

domain = Domain.global()
store = Jenkins.instance.getExtensionList('com.cloudbees.plugins.credentials.SystemCredentialsProvider')[0].getStore()

usernameAndPassword = new UsernamePasswordCredentialsImpl(
    CredentialsScope.GLOBAL,
    args[0],
    "",
    args[1],
    args[2]
)

store.addCredentials(domain, usernameAndPassword)
I ran into the same issue and after a bit of digging/testing it seems you need to change this
/credential-store/domain/_/createCredentials
to this
/credentials/store/system/domain/_/createCredentials
This doesn't work: /credential-store/domain/_/api/json
You have to use this URL: /credentials/store/system/domain/_/api/json
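Putting the corrected path together with the question's approach, here is a minimal sketch. The host, port, user/token, and the simplified username-password payload are all illustrative, and recent Jenkins versions may additionally require a CSRF crumb header:

import json
import requests

# Illustrative payload: a simple username/password credential.
data = {
    'credentials': {
        'scope': "GLOBAL",
        'username': "jenkins",
        'password': "secret",
        'stapler-class': "com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl",
    }
}
payload = {
    'json': json.dumps(data),
    'Submit': "OK",
}
# Note the corrected path: /credentials/store/system/...
r = requests.post(
    "http://localhost:8080/credentials/store/system/domain/_/createCredentials",
    auth=("admin", "api-token"),  # illustrative user and API token
    data=payload,
)
print(r.status_code)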

SSL options in gocql

In my Cassandra config I have enabled user authentication, and I connect with cqlsh over SSL.
I'm having trouble implementing the same with gocql; the following is my code:
cluster := gocql.NewCluster("127.0.0.1")
cluster.Authenticator = gocql.PasswordAuthenticator{
    Username: "myuser",
    Password: "mypassword",
}
cluster.SslOpts = &gocql.SslOptions{
    CertPath: "/path/to/cert.pem",
}
When I try to connect, I get the following error:
gocql: unable to create session: connectionpool: unable to load X509 key pair: open : no such file or directory
In python I can do this with something like:
from ssl import PROTOCOL_TLSv1
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

USER = 'username'
PASS = 'password'

ssl_opts = {
    'ca_certs': '/path/to/cert.pem',
    'ssl_version': PROTOCOL_TLSv1
}
credentials = PlainTextAuthProvider(username=USER, password=PASS)
# define host, port, cqlsh protocol version
cluster = Cluster(contact_points=HOST, protocol_version=CQLSH_PROTOCOL_VERSION, auth_provider=credentials, port=CASSANDRA_PORT, ssl_options=ssl_opts)
I checked the gocql and TLS documentation, but I'm unsure about how to set the SSL options.
You're adding a cert without a private key, which is where the "no such file or directory" error is coming from.
Your Python code is adding a CA; you should do the same with the Go code:
gocql.SslOptions{
    CaPath: "/path/to/cert.pem",
}
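For completeness, here is a minimal sketch of the full connection setup with the CA path in place. The host and credentials mirror the question, and host verification is left disabled here as in the original:

package main

import (
    "log"

    "github.com/gocql/gocql"
)

func main() {
    cluster := gocql.NewCluster("127.0.0.1")
    cluster.Authenticator = gocql.PasswordAuthenticator{
        Username: "myuser",
        Password: "mypassword",
    }
    cluster.SslOpts = &gocql.SslOptions{
        // CA certificate: the counterpart of ca_certs in the Python snippet.
        CaPath:                 "/path/to/cert.pem",
        EnableHostVerification: false,
    }

    session, err := cluster.CreateSession()
    if err != nil {
        log.Fatalf("unable to create session: %v", err)
    }
    defer session.Close()
}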