Connect App Engine to Google Cloud SQL fails - Kotlin

I'm following this guide
I'm filling the config like this:
val datasourceConfig = HikariConfig().apply {
    jdbcUrl = "jdbc:mysql:///$DB_NAME"
    username = DB_USER
    password = DB_PASS
    mapOf(
        "cloudSqlInstance" to CLOUD_SQL_CONNECTION_NAME,
        "socketFactory" to "com.google.cloud.sql.mysql.SocketFactory",
        "ipTypes" to "PUBLIC,PRIVATE",
    ).forEach {
        addDataSourceProperty(it.key, it.value)
    }
}
Output of gcloud sql instances describe project-name:
backendType: SECOND_GEN
connectionName: project-name:europe-west1:project-name-db
databaseVersion: MYSQL_5_7
failoverReplica:
  available: true
gceZone: europe-west1-d
instanceType: CLOUD_SQL_INSTANCE
ipAddresses:
- ipAddress: *.*.*.*
  type: PRIMARY
kind: sql#instance
name: project-name-db
project: project-name
region: europe-west1
from which I'm filling my env variables:
DB_NAME=project-name-db
CLOUD_SQL_CONNECTION_NAME=project-name:europe-west1:project-name-db
On the deployed app, the line val dataSource = HikariDataSource(datasourceConfig) crashes with the following exception:
com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: Cannot connect to MySQL server on localhost:3,306.
Make sure that there is a MySQL server running on the machine/port you are trying to connect to and that the machine this software is running on is able to connect to this host/port (i.e. not firewalled). Also make sure that the server has not been started with the --skip-networking flag.
Update: I've tried adding google between the second and third slashes ("jdbc:mysql://google/$DB_NAME"), according to this answer; now I get:
Cannot connect to MySQL server on google:3,306.

I was missing the following dependency:
implementation("com.google.cloud.sql:mysql-socket-factory-connector-j-8:1.2.2")
more info here
Also, DB_NAME is not the instance name from the gcloud sql instances describe output, but the name of a database that has to be created in Console -> Project -> SQL -> Databases.
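Putting both fixes together, a minimal sketch of a working setup (assuming a Gradle Kotlin DSL build and that DB_USER, DB_PASS, DB_NAME, and CLOUD_SQL_CONNECTION_NAME are read from the env variables above):
// build.gradle.kts
dependencies {
    implementation("com.google.cloud.sql:mysql-socket-factory-connector-j-8:1.2.2")
}

import com.zaxxer.hikari.HikariConfig
import com.zaxxer.hikari.HikariDataSource

// DB_NAME must be a database created in Console -> Project -> SQL -> Databases,
// not the Cloud SQL instance name
val datasourceConfig = HikariConfig().apply {
    jdbcUrl = "jdbc:mysql:///$DB_NAME"
    username = DB_USER
    password = DB_PASS
    addDataSourceProperty("cloudSqlInstance", CLOUD_SQL_CONNECTION_NAME)
    addDataSourceProperty("socketFactory", "com.google.cloud.sql.mysql.SocketFactory")
    addDataSourceProperty("ipTypes", "PUBLIC,PRIVATE")
}
val dataSource = HikariDataSource(datasourceConfig)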

Related

Quarkus: Overwrite DEV profile config with empty values for Postgres properties

I'm using Quarkus (2.7.3.Final) with Postgres (quarkus-jdbc-postgresql).
And I really like Quarkus' approach: if you configure no username, password, or URL for your datasource, it will start a testcontainer and emulate the database when you start the app in development mode.
So, for example, if you define this in your application.yml (or application.properties), Quarkus will start a Postgres testcontainer for you when you start the app with ./mvnw clean quarkus:dev:
quarkus:
  datasource:
    username:
    password:
    db-kind: postgresql
    jdbc:
      driver: org.postgresql.Driver
      url:
The log says "Dev Services for the default datasource (postgresql) started."
Pretty neat! :-)
However, what I really want is to define my real/production database connection settings in application.yml, and then override them in application-dev.yml so that the testcontainer is started only in development mode:
application.yml with PROD settings:
quarkus:
  datasource:
    username: myuser
    password: mypassword
    db-kind: postgresql
    jdbc:
      driver: org.postgresql.Driver
      url: jdbc:postgresql://hostname:5432/mydb
application-dev.yml with DEV settings:
quarkus:
  datasource:
    username:
    password:
    jdbc:
      url:
But overriding the properties with null values doesn't work; when I start the app in development mode I get the error:
Datasource '<default>': Connection to hostname:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
The overriding itself works: if I change my application-dev.yml to use an embedded H2 instead of the implicit testcontainer, the application starts:
application-dev.yml with H2 settings:
quarkus:
  datasource:
    username: sa
    password: mypassword
    db-kind: h2
    jdbc:
      driver: org.h2.Driver
      url: jdbc:h2:mem:mydb;DB_CLOSE_DELAY=-1
So my question is: how can I override my datasource configuration with null values, so that Quarkus uses testcontainers in dev mode?
And by the way, switching from application.yml to Quarkus' default application.properties unfortunately did not help.
Thanks a lot!
Just to complete this: combining the previous answers and comments using the prod profile, this is my solution:
application.yml with DEV settings:
quarkus:
  datasource:
    username:
    password:
    db-kind: postgresql
    jdbc:
      driver: org.postgresql.Driver
      url:
application-prod.yml with PROD settings:
quarkus:
  datasource:
    username: myuser
    password: mypassword
    jdbc:
      url: jdbc:postgresql://hostname:5432/mydb
The application-dev.yml isn't needed this way. Thanks folks! :-)
Following Quarkus' official documentation:
"If a profile does not define a value for a specific attribute, the default (no profile) value is used."
This behaviour is useful in many cases, but in yours it leads to the inability to override properties, once defined in the default profile, back to their empty state.
I would suggest swapping your profiles around, i.e. treating the null-valued dev configuration as the default and providing meaningful non-null prod values in an overriding profile.
If you are worried that the dev values might accidentally be used in a production environment this way, remember that Quarkus uses the prod profile by default if not told otherwise.
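For completeness, a sketch of how this can also be kept in a single application.yml using Quarkus' profile-prefixed keys (the quoted "%prod" top-level key is Quarkus' YAML syntax for profile-specific values):
quarkus:
  datasource:
    username:
    password:
    db-kind: postgresql
    jdbc:
      driver: org.postgresql.Driver
      url:

"%prod":
  quarkus:
    datasource:
      username: myuser
      password: mypassword
      jdbc:
        url: jdbc:postgresql://hostname:5432/mydb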

How to use EKS with suitable volumes and resolve an insufficient subnet IP issue on AWS?

I deployed an application in EKS. The deployment is always pending; when I checked the events, I found these issues.
$ kubectl get events
LAST SEEN TYPE REASON OBJECT MESSAGE
89s Warning FailedScheduling pod/awx-demo-111111111-122222 running PreBind plugin "VolumeBinding": binding volumes: provisioning failed for PVC "awx-demo-projects-claim"
49m Warning FailedDeployModel ingress/awx-demo-ingress Failed deploy model due to InvalidSubnet: Not enough IP space available in subnet-031f9c702bc474e8f. ELB requires at least 8 free IP addresses in each subnet.
status code: 400, request id: 11111111-2222-3333-4444-555555555555
32m Warning FailedDeployModel ingress/awx-demo-ingress Failed deploy model due to InvalidSubnet: Not enough IP space available in subnet-01322i912fas0123na. ELB requires at least 8 free IP addresses in each subnet.
status code: 400, request id: 11111111-2222-3333-4444-555555555515
15m Warning FailedDeployModel ingress/awx-demo-ingress Failed deploy model due to InvalidSubnet: Not enough IP space available in subnet-031f9c702bc474e8f. ELB requires at least 8 free IP addresses in each subnet.
status code: 400, request id: 11111111-2222-3333-4444-555555555525
89s Normal WaitForPodScheduled persistentvolumeclaim/awx-demo-projects-claim waiting for pod awx-demo-111111111-122222 to be scheduled
21m Warning ProvisioningFailed persistentvolumeclaim/awx-demo-projects-claim Failed to provision volume with StorageClass "gp2": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce] are supported
It seems there are a volume issue and a subnet issue. I created the EKS cluster and node group with these configurations:
resource "aws_eks_cluster" "this" {
encryption_config {
resources = ["secrets"]
provider {
key_arn = aws_kms_key.this.arn
}
}
enabled_cluster_log_types = ["api", "authenticator", "audit", "scheduler", "controllerManager"]
name = local.cluster_name
version = "1.20"
role_arn = aws_iam_role.eks_cluster.arn
vpc_config {
subnet_ids = [
data.aws_ssm_parameter.private_subnet_0_id.value,
data.aws_ssm_parameter.private_subnet_1_id.value,
]
security_group_ids = [aws_security_group.this.id]
endpoint_public_access = true
}
depends_on = [
aws_iam_role_policy_attachment.eks_cluster_policy,
aws_iam_role_policy_attachment.eks_vpc_resource_controller,
aws_iam_role_policy_attachment.eks_service_policy,
]
tags = merge(
local.tags,
)
}
resource "aws_eks_node_group" "this" {
cluster_name = local.cluster_name
node_group_name = local.node_group_name
node_role_arn = aws_iam_role.eks_nodes.arn
instance_types = ["m5.2xlarge"]
subnet_ids = [
data.aws_ssm_parameter.private_subnet_0_id.value,
data.aws_ssm_parameter.private_subnet_1_id.value,
]
scaling_config {
desired_size = 2
max_size = 2
min_size = 2
}
lifecycle {
ignore_changes = [scaling_config[0].desired_size]
}
depends_on = [
aws_iam_role_policy_attachment.eks_worker_node_policy,
aws_iam_role_policy_attachment.eks_cni_policy,
aws_iam_role_policy_attachment.ec2_container_register_readonly,
]
tags = merge(
local.tags,
)
}
I didn't define the volume type for EBS, so maybe it's using the default setting. How do I fix the issue?
For the insufficient-IP-addresses issue: if I create a new subnet for EKS to use, is it necessary to delete the EKS cluster or node group?
By the way, the deployment I used was https://raw.githubusercontent.com/ansible/awx-operator/0.13.0/deploy/awx-operator.yaml, and the install followed https://github.com/ansible/awx-operator#basic-install.
@miantian, continuing our discussion from the comments:
A subnet cannot simply be resized. If you change the subnet size, the subnet will be recreated; but since the EKS cluster is already in it, the recreation will fail. So I would say: delete everything and start fresh.
Regarding the volume issue: by default, EKS only supports the ReadWriteOnce access mode. This is because of the technical limitation of AWS that an EBS volume can only be attached to one EC2 instance at a time. If you want the ReadWriteMany access mode, you need to use EFS.
If you want to use EFS, look up the NFS/EFS client provisioner for EKS. There are a few steps you need to follow to set up an EFS provisioner in EKS; then you can start using the ReadWriteMany access mode, as in the sketch below.
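For example, with the AWS EFS CSI driver (one option; the external NFS/EFS client provisioner works similarly), a ReadWriteMany setup might look roughly like this. The fileSystemId is a placeholder for your own EFS file system, which needs mount targets in the cluster's subnets:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com          # requires the AWS EFS CSI driver installed in the cluster
parameters:
  provisioningMode: efs-ap            # dynamic provisioning via EFS access points
  fileSystemId: fs-0123456789abcdef0  # placeholder: your EFS file system ID
  directoryPerms: "700"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: awx-demo-projects-claim
spec:
  accessModes:
    - ReadWriteMany                   # supported by EFS, unlike EBS-backed gp2
  storageClassName: efs-sc
  resources:
    requests:
      storage: 8Gi                    # EFS ignores the size, but the field is required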

Establish Redis Connection in Phoenix

I need to establish a Redis connection when my Phoenix app initially loads. When reading the docs, I thought that code would go in /config/dev.exs or /config/config.exs, but the Redix dependency I am using as a Redis interface is not loaded in /config.
The following results in a reference error in /config:
Redix.start_link("redis://localhost:6379/3", name: :redix)
I only want to call this once on app load. Where should I put this call in my Phoenix app?
Adding {Redix, name: :redix} to the children list in application.ex adds a Redix process to the supervision tree, which means it will start along with your application:
children = [
  # Start the Ecto repository
  MyApp.Repo,
  # Start the Telemetry supervisor
  AppWeb.Telemetry,
  # Start the PubSub system
  ....
  # Single Redis connection
  {Redix, name: :redix}
]
See https://hexdocs.pm/redix/real-world-usage.html
You can check in iex -S mix:
iex(1)> Redix.command(:redix, ["PING"])
{:ok, "PONG"}
Now you can use all the regular Redix commands: https://hexdocs.pm/redix/readme.html#usage
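This assumes Redix is already declared as a dependency in mix.exs; a minimal sketch (the version constraint is illustrative):
defp deps do
  [
    {:redix, "~> 1.1"}
  ]
end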

DDEV Prestashop database connection

I want to install PrestaShop with DDEV, but I can't connect to the database.
I tried 127.0.0.1:32775 and localhost:32775, with "db" as user/db/password
But I get this error:
Database Server is not found. Please verify the login, password and server fields (DbPDO)
The database is up and running, and connecting via the command line works:
mysql --host=127.0.0.1 --port=32775 --user=db --password=db --database=db
Project information:
PrestaShop 1.7.6.2 installer (I first tried the GitHub/Composer installation - error; then the zip download with the wizard - same error)
ddev version v1.11.2
DDEV project type: php
Host: MacOS 10.15.1
DDEV config.yaml - changes to default: router_http(s)_port
APIVersion: v1.11.2
name: prestatest
type: php
docroot: ""
php_version: "7.2"
webserver_type: nginx-fpm
router_http_port: "880"
router_https_port: "8443"
xdebug_enabled: false
additional_hostnames: []
additional_fqdns: []
mariadb_version: "10.2"
nfs_mount_enabled: false
provider: default
use_dns_when_possible: true
timezone: ""
ddev describe will show you the db connection information.
Host: db
User: db
Password: db
Database: db
Mostly, people forget the hostname configuration: from inside the web container the database host is db, not 127.0.0.1 (the 127.0.0.1:<port> mapping only works from the host machine).
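So in the PrestaShop installer, the database settings for a default DDEV project would be (no custom port needed; inside the Docker network MySQL listens on the standard 3306):
Database server address: db
Database name:           db
Database login:          db
Database password:       db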

Apache Drill: issue connecting to Hive with Kerberos enabled

I have a cluster which is Kerberized. I have installed Drill on another server and I am trying to use Hive, which is part of the Kerberized cluster.
For Hive, I have put the below configuration in my drill-override.conf:
drill.exec: {
  security: {
    # user.auth.enabled: true,
    auth.mechanisms: ["KERBEROS"],
    auth.principal: "xxxx/xxxxxxxx",
    auth.keytab: "/xxx/xxxx/drill.keytab"
    drill.exec.http.ssl_enabled = "true"
  }
}
drill.exec: {
  cluster-id: "drillbits1",
  zk.connect: "localhost:2181"
}
When I access Hive from the Drill UI, I get the below errors:
2017-04-07 12:32:48,322 [2718c667-5587-b307-58f7-b673e29b7dbf:frag:0:0] WARN o.a.d.e.s.h.schema.HiveSchemaFactory - Failure while getting Hive database list.
org.apache.thrift.TException: java.util.concurrent.ExecutionException: MetaException(message:Got exception: org.apache.thrift.transport.TTransportException null)
I have tried Drill versions 1.5.0 and 1.10.0.
I would appreciate any help to resolve this issue.
The configuration you have mentioned inside drill-override.conf is for the DrillClient-to-Drillbit connection using Kerberos.
For Hive, I don't think we have tried it before, but based on some research I think you can try adding the below to your Drill Hive storage plugin. Also make sure that you have generated a Kerberos ticket on the Drillbit node, using the kinit command, for the process user you run Drillbit as. Please try it and let us know if it helps.
{
  "type": "hive",
  "enabled": true,
  "configProps": {
    "hive.metastore.uris": "thrift://<metastore_ip:port>",
    "hive.metastore.sasl.enabled": "true",
    "hive.metastore.kerberos.principal": "<metastore_kerberos_principal>"
  }
}
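For reference, generating the ticket on the Drillbit node might look like this, reusing the keytab and principal placeholders from your drill-override.conf:
# run as the process user that starts the Drillbit
kinit -kt /xxx/xxxx/drill.keytab xxxx/xxxxxxxx
# verify that a valid ticket was granted
klist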