Laravel SQL Server connection with ENCRYPT=yes trustServerCertificate=true

I have an Ubuntu Docker container running PHP 5.5.9 and Laravel 5.2, and it can connect successfully to SQL Server and get results back.
The docker image I am using is https://hub.docker.com/r/h2labs/laravel-mssql/
The problem is that the server requires encryption, and I can't find how to pass the following parameters to the Laravel connection string for MSSQL:
ENCRYPT=yes;trustServerCertificate=true
My SQL Server connection string at present looks like this
DB_CONNECTION=sqlsrv
DB_HOST=sql.mydomain.com
DB_PORT=1433
DB_DATABASE=mydbname
DB_USERNAME=mysusername
DB_PASSWORD=mypass
My Laravel database config looks like this:
'sqlsrv' => [
    'driver' => 'sqlsrv',
    'host' => env('DB_HOST', 'localhost'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'charset' => 'utf8',
    'prefix' => '',
],
The SQL Server error log entry is
Encryption is required to connect to this server but the client library does not support encryption; the connection has been closed. Please upgrade your client library. [CLIENT: 103.31.114.56]

Support for these options was not introduced until Laravel 5.4; specifically, v5.4.11.
So you would first need to upgrade to laravel/framework:>=5.4.11,<5.5
Then, to configure your application, you will need to modify your config/database.php file as follows:
// ...
'sqlsrv' => [
    'driver' => 'sqlsrv',
    'host' => env('DB_HOST', 'localhost'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'charset' => 'utf8',
    'prefix' => '',
    'encrypt' => 'yes', // alternatively, defer to an env variable
    'trust_server_certificate' => 'true', // alternatively, defer to an env variable
],
// ...
DatabaseServiceProvider, via ConnectionFactory and SqlServerConnector, will use this configuration to build the underlying PDO connection with those options set in the DSN.
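For the "defer to an env variable" alternative mentioned in the comments above, a minimal sketch could look like this (the DB_ENCRYPT and DB_TRUST_SERVER_CERTIFICATE variable names are my own choice, not a Laravel convention):
// config/database.php (inside the 'sqlsrv' array)
'encrypt' => env('DB_ENCRYPT', 'no'),
'trust_server_certificate' => env('DB_TRUST_SERVER_CERTIFICATE', 'false'),

// .env
DB_ENCRYPT=yes
DB_TRUST_SERVER_CERTIFICATE=true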

Related

Unable to run migrations on GCP with CakePHP 3.8

I am trying to set up my CakePHP 3.8 project on a GCP "Compute Engine" VM.
I have set up my app.php to use the following DB configuration:
'className' => 'Cake\Database\Connection',
'driver' => 'Cake\Database\Driver\Mysql',
'persistent' => false,
'datasource' => 'Database/Mysql',
'host' => 'localhost',
'username' => 'user',
'password' => 'password',
'database' => 'dbname',
'prefix' => '',
'encoding' => 'utf8',
'timezone' => 'UTC',
'cacheMetadata' => true,
'log' => false,
'flags' => [
    PDO::MYSQL_ATTR_INIT_COMMAND => "SET @@SESSION.sql_mode='';",
    // uncomment below for use with Google Cloud SQL
    PDO::MYSQL_ATTR_SSL_KEY => CONFIG . 'ssl/client-key.pem',
    PDO::MYSQL_ATTR_SSL_CERT => CONFIG . 'ssl/client-cert.pem',
    PDO::MYSQL_ATTR_SSL_CA => CONFIG . 'ssl/server-ca.pem',
    PDO::MYSQL_ATTR_SSL_VERIFY_SERVER_CERT => false,
],
My problem happens when I try to run migrations. The site works just fine with the above configuration; however, if I run
$> php bin/cake.php migrations migrate
I get a slew of errors saying that it cannot connect: access denied for user@host.
If I add
'ssl_key' => CONFIG .'ssl/client-key.pem',
'ssl_cert' => CONFIG . 'ssl/client-cert.pem',
'ssl_ca' => CONFIG . 'ssl/server-ca.pem',
I get an error:
Caused by: [PDOException] PDO::__construct(): Peer certificate CN=`gcpname:gcpserver' did not match expected CN=`111.111.111.111' in /var/www/mydomain.com/vendor/robmorgan/phinx/src/Phinx/Db/Adapter/PdoAdapter.php on line 79
I guess this is because the Migrations plugin still doesn't pass the flags or custom mysql_attr_* options over to the Phinx connection configuration; see this issue:
https://github.com/cakephp/migrations/issues/374
I don't think there's much that can be done here, other than adding support for flags/attribute options, or using Phinx directly (i.e. without the Migrations plugin).
I've pushed a PR that would add support for driver-specific flags; you might want to give it a try and comment on the issue or the PR as to whether it works for you. It's for CakePHP 4.x (Migrations 3.x); I'll backport it to CakePHP 3.x (Migrations 2.x) if it is accepted:
https://github.com/cakephp/migrations/pull/478
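In the meantime, one way to run Phinx directly is to hand it a PDO connection that already has the SSL attributes set. The sketch below assumes Phinx accepts an existing PDO object via the environment's connection option; the paths, credentials and database name are placeholders, so treat it as a starting point rather than a verified configuration.
<?php
// phinx.php (sketch) - build the PDO connection ourselves so the SSL options apply
$pdo = new PDO(
    'mysql:host=localhost;dbname=dbname;charset=utf8',
    'user',
    'password',
    [
        PDO::MYSQL_ATTR_SSL_KEY => __DIR__ . '/config/ssl/client-key.pem',
        PDO::MYSQL_ATTR_SSL_CERT => __DIR__ . '/config/ssl/client-cert.pem',
        PDO::MYSQL_ATTR_SSL_CA => __DIR__ . '/config/ssl/server-ca.pem',
        PDO::MYSQL_ATTR_SSL_VERIFY_SERVER_CERT => false,
    ]
);

return [
    'paths' => ['migrations' => 'config/Migrations'],
    'environments' => [
        'default_migration_table' => 'phinxlog',
        'default_environment' => 'default', // 'default_database' on older Phinx versions
        'default' => [
            'name' => 'dbname',
            'connection' => $pdo, // Phinx reuses this PDO instead of building its own
        ],
    ],
];
You would then run php vendor/bin/phinx migrate -c phinx.php instead of the Migrations shell.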

Cakephp + RDS + SSL: certificate verify failed

I have a CakePHP 3.8 website connected to an RDS database. I am trying to use an SSL database connection.
I got the pem certificate from AWS. I have created a test user with access to my database, and this user is set up to require SSL.
I can successfully connect to the database with my user from the command line:
mysql -u ssl-user -p -h xxxxx.xxxxx.ap-southeast-2.rds.amazonaws.com --ssl-ca=./rds-ca-2019-root.pem
I have set up my database connection in CakePHP as follows:
'Datasources' => [
    'default' => [
        'className' => 'Cake\Database\Connection',
        'driver' => 'Cake\Database\Driver\Mysql',
        'persistent' => false,
        'host' => 'xxxxx.xxxxx.ap-southeast-2.rds.amazonaws.com',
        'username' => 'sl-user',
        'password' => 'xxxxxxx',
        'database' => 'xxxxxxx',
        'ssl_ca' => '/var/www/rds-ca-2019-root.pem',
        'encoding' => 'utf8',
        'timezone' => 'UTC',
        'flags' => [],
        'cacheMetadata' => true,
        'log' => false,
        'quoteIdentifiers' => true,
        'url' => env('DATABASE_URL', null),
    ],
],
With the above setup the connection fails and I get the following error:
Error: [PDOException] SQLSTATE[HY000] [2002]
Caused by: [PDOException] PDO::__construct(): SSL operation failed with code 1. OpenSSL Error messages:
error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed (/var/www/vendor/cakephp/cakephp/src/Database/Driver.php:92)
Any ideas why CakePHP can't connect?
I actually realised that the RDS server was running MariaDB 10.3.x. AWS provides specific docs for MariaDB: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MariaDB.html#MariaDB.Concepts.SSLSupport
The solution for me was to use the combined certificate:
https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
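In config terms the fix is just pointing ssl_ca at the downloaded combined bundle instead of the single root certificate; a minimal sketch, assuming the bundle is saved next to the old one:
// app.php datasource (sketch) - only this line changes
'ssl_ca' => '/var/www/rds-combined-ca-bundle.pem',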

INSERT data from database into another database, running on different hosts

I'm facing the following problem:
I have two MariaDB databases, running on two different hosts. Both of them are used to run two different websites, each of them having Drupal and CiviCRM installed and running.
Some of the data stored in the contacts table of CiviCRM from website 1 needs to be kept in sync with these same contacts on website 2.
Keeping in sync means : inserting new contacts, and updating existing contacts.
I was wondering if this could be done via a trigger?
I know I can activate remote SQL in my cPanel, as I use this to work with MySQL Workbench or similar software.
Any ideas? Would a trigger work? Or do I rather need to write some code in a language other than SQL?
You can define multiple databases for your Drupal site to connect to in your settings.php:
$databases = [
    'HOST1.DATABASE' => [
        'default' => [
            'driver' => 'mysql',
            'username' => '',
            'password' => '',
            'host' => '127.0.0.1',
            'port' => '3306',
            'prefix' => '',
            'database' => 'contacts',
            'collation' => 'utf8mb4_general_ci',
        ],
    ],
    'HOST2.DATABASE' => [
        'default' => [
            'driver' => 'mysql',
            'username' => '',
            'password' => '',
            'host' => '127.0.0.1',
            'port' => '3306',
            'prefix' => '',
            'database' => 'contacts_audit',
            'collation' => 'utf8mb4_general_ci',
        ],
    ],
];
After this you can tell getConnection() which key of the $databases array you want to connect to; the key is its second parameter (the first is the target within that key, usually 'default').
\Drupal\Core\Database\Database::getConnection('default', 'HOST1.DATABASE')
    ->query('CREATE TRIGGER contacts_after_update AFTER UPDATE ON contacts FOR EACH ROW BEGIN');
and
\Drupal\Core\Database\Database::getConnection('default', 'HOST2.DATABASE')
    ->query('INSERT INTO contacts_audit ( contact_id, updated_date, updated_by) VALUES ( NEW.contact_id, SYSDATE(), ); END;');
(If you leave the parameters of getConnection() empty, it connects to the database under the $databases['default'] key. You can also use setActiveConnection() if you want to keep working with a particular database; as its name says, it sets the active connection to the desired key of $databases.)
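A trigger-free alternative, building on the same two connections, is to do the sync in PHP: read the contacts from the first connection and upsert them into the second with a merge query. The sketch below is only an illustration; the table and column names (civicrm_contact, id, display_name) are placeholders standing in for your real CiviCRM schema.
use Drupal\Core\Database\Database;

// Read the contacts from website 1.
$source = Database::getConnection('default', 'HOST1.DATABASE');
$rows = $source->query('SELECT id, display_name FROM civicrm_contact')->fetchAll();

// Insert new contacts / update existing ones on website 2.
$target = Database::getConnection('default', 'HOST2.DATABASE');
foreach ($rows as $row) {
    $target->merge('civicrm_contact')
        ->key('id', $row->id)
        ->fields(['display_name' => $row->display_name])
        ->execute();
}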
Hope this helps in some way.

How to set http timeouts for Amazon AWS SDK for PHP

I'm using the Amazon AWS SDK for PHP (namely, version 2.7.16) to upload files to an S3 bucket. How can I set a timeout for http/tcp operations (connection, upload, etc.)? Although I've googled a lot I wasn't able to find out how.
Sample code I'm using:
$awsS3Client = Aws\S3\S3Client::factory(array(
    'key' => '...',
    'secret' => '...'
));
$awsS3Client->putObject(array(
    'Bucket' => '...',
    'Key' => 'destin/ation.file',
    'ACL' => 'private',
    'Body' => 'content'
));
so I'd like to set a timeout on the putObject() call.
Thanks!
Eventually I worked it out myself:
$awsS3Client = Aws\S3\S3Client::factory(array(
    'key' => '...',
    'secret' => '...',
    'curl.options' => array(
        CURLOPT_CONNECTTIMEOUT => 5,
        CURLOPT_TIMEOUT => 10,
    )
));
It looks like the AWS SDK for PHP uses cURL internally, so network-related options are set this way.
With SDK version 3 this can be configured using the http configuration key.
$awsS3Client = new Aws\S3\S3Client([
    'version' => 'latest',
    'region' => '...',
    'credentials' => [
        'key' => '...',
        'secret' => '...',
    ],
    'http' => [
        'connect_timeout' => 5,
        'timeout' => 10,
    ],
]);
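If you only need the timeout on a single call rather than on the whole client, SDK v3 also lets you pass per-operation HTTP options via the @http parameter (as far as I'm aware; double-check against the SDK docs for your version):
$awsS3Client->putObject([
    'Bucket' => '...',
    'Key' => 'destin/ation.file',
    'ACL' => 'private',
    'Body' => 'content',
    '@http' => [
        'connect_timeout' => 5,
        'timeout' => 10,
    ],
]);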

Don't use prepared statements in Laravel Eloquent ORM?

Can I have Eloquent ORM run a query without using prepared statements? Or do I have to use whereRaw()?
I need to use a raw query because I'm trying to interact with InfiniDB, which lacks support for prepared statements from PHP. At any rate, all queries will be using internally generated data, not user input, so it should not be a security issue.
For anything other than SELECT you can use unprepared()
DB::unprepared($sql);
For an unprepared SELECT you can use a plain PDO query() by getting access to the active PDO connection through getPdo():
$pdo = DB::getPdo();
$query = $pdo->query($sql);
$result = $query->fetchAll();
There's an easy way to do it. In the file config/database.php you can specify options for PHP's PDO like so:
'mysql_unprepared' => [
    'driver' => 'mysql',
    'host' => env('DB_HOST', '127.0.0.1'),
    'port' => env('DB_PROXY_PORT', '6033'),
    'username' => env('DB_CACHED_USERNAME', 'forge'),
    'password' => env('DB_CACHED_PASSWORD', ''),
    'database' => env('DB_DATABASE', 'forge'),
    'unix_socket' => env('DB_SOCKET', ''),
    'charset' => 'utf8mb4',
    'collation' => 'utf8mb4_unicode_ci',
    'prefix' => '',
    'prefix_indexes' => true,
    'strict' => true,
    'engine' => null,
    'options' => extension_loaded('pdo_mysql') ? [
        PDO::ATTR_EMULATE_PREPARES => true,
    ] : [],
    'modes' => [
        'ONLY_FULL_GROUP_BY',
        'STRICT_TRANS_TABLES',
        'NO_ZERO_IN_DATE',
        'NO_ZERO_DATE',
        'ERROR_FOR_DIVISION_BY_ZERO',
        'NO_ENGINE_SUBSTITUTION',
    ],
],
As you can see, there is an option, PDO::ATTR_EMULATE_PREPARES, which, when set to true, performs the prepare step at the application level and sends the query to the server unprepared. I didn't realise PDO had this option until I had already written an extension of Laravel's mysql driver just to intercept SELECT queries and run them as unprepared mysqli queries so that ProxySQL could cache them.
So this answer could have been a lot more complicated. Cheers.
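For completeness, a minimal sketch of how you might use that extra connection from application code (mysql_unprepared matches the config entry above; the users table is just an example):
use Illuminate\Support\Facades\DB;

// Queries on this connection use emulated prepares, so the SQL
// reaches the server as a plain, unprepared statement.
$rows = DB::connection('mysql_unprepared')->select('select * from users where id = 1');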