Google Cloud Platform App Engine and Google Cloud SQL with Sequelize: Error: connect ETIMEDOUT - express

The server is intended to be deployed on GCP; React is deployed as a separate instance.
I use Sequelize, Node, Express, and MySQL.
I confirmed through MySQL Workbench that Sequelize connects to the Cloud SQL database without problems.
I have deployed through App Engine before, so I added the Cloud SQL-related settings and expected that, if everything was configured correctly, the API would be able to return the desired data.
The deployment succeeded with gcloud app deploy, but when I tried to verify that the API responded correctly, it kept returning 500 errors.
I don't know where to fix this. Can you give me a tip?
config/config.js
require('dotenv').config();

module.exports = {
  development: {
    username: "root",
    password: process.env.SEQUELIZE_PASSWORD,
    database: "DBName",
    host: "SQL Instance Public IP Address",
    dialect: "mysql",
    timezone: "+09:00"
  },
  test: {
    username: "root",
    password: null,
    database: "database_test",
    host: "127.0.0.1",
    dialect: "mysql"
  },
  production: {
    username: "root",
    password: process.env.SEQUELIZE_PASSWORD,
    database: "DBName",
    host: "SQL Instance Public IP Address",
    dialect: "mysql",
    timezone: "+09:00",
    logging: false
  },
}
server.js
const express = require('express')
const cors = require('cors')
const dotenv = require('dotenv')
const Test = require('./Routes/Test')
const { sequelize } = require('./models')

dotenv.config()
const app = express()
app.set("PORT", process.env.PORT || 8080)

sequelize.sync({ force: false })
  .then(() => { console.log('DB online') })
  .catch((err) => { console.error(err) })

app.use(cors({ origin: "*" }))
app.use(express.json())
app.use('/test', Test)

app.listen(app.get('PORT'), () => {
  console.log(app.get("PORT"), 'Online')
})
In server.js, /test is the route the front end calls with a GET request to receive this data.
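For context, the /test route is roughly shaped like the sketch below. This is not the original Routes/Test file; the router layout and the TestModel name are assumptions, shown only to make the failing call path concrete:

// Routes/Test.js -- hypothetical sketch, not the original file
const express = require('express')
const router = express.Router()
const { TestModel } = require('../models') // assumed model name

router.get('/', async (req, res) => {
  try {
    const rows = await TestModel.findAll()
    res.json(rows)
  } catch (err) {
    // if Sequelize cannot reach Cloud SQL (connect ETIMEDOUT), the query
    // rejects and the route answers with a 500, matching the symptom
    console.error(err)
    res.status(500).json({ error: 'DB query failed' })
  }
})

module.exports = router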
app.yaml
runtime: nodejs
env: flex
service: default

manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10

env_variables:
  SQL_USER: "root"
  SQL_PASSWORD: "root password entered when SQL was created"
  SQL_DATABASE: "Instance ID"
  INSTANCE_CONNECTION_NAME: "logintest-b314a:asia-northeast3:Instance ID"

beta_settings:
  cloud_sql_instances: "logintest-b314a:asia-northeast3:Instance ID"
Error Messages from Error Reporting
SequelizeConnectionError: connect ETIMEDOUT
at ConnectionManager.connect (/app/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:102:17)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
parent: Error: connect ETIMEDOUT
at Connection._handleTimeoutError (/app/node_modules/mysql2/lib/connection.js:189:17)
at listOnTimeout (internal/timers.js:554:17)
at processTimers (internal/timers.js:497:7) {
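One thing worth pointing out (an observation, not a verified fix): with env: flex and beta_settings: cloud_sql_instances, App Engine exposes the Cloud SQL instance as a unix socket at /cloudsql/<INSTANCE_CONNECTION_NAME>, while the production entry in config/config.js points Sequelize at the instance's public IP, which App Engine can only reach if that network is authorized or the Cloud SQL Auth Proxy is used. A minimal sketch of a production entry that uses the socket instead, reusing the env_variables already declared in app.yaml (treat the exact shape as an assumption to adapt):

// config/config.js -- sketch of the production entry only
production: {
  username: process.env.SQL_USER,
  password: process.env.SQL_PASSWORD,
  database: process.env.SQL_DATABASE,
  dialect: "mysql",
  dialectOptions: {
    // App Engine mounts the Cloud SQL socket at /cloudsql/<project:region:instance>
    socketPath: `/cloudsql/${process.env.INSTANCE_CONNECTION_NAME}`
  },
  timezone: "+09:00",
  logging: false
}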

Related

Not established connection in TypeORM

I've got a problem with running migrations in TypeORM. Right now, when I launch yarn typeorm migration:run, it throws this error:
CannotExecuteNotConnectedError: Cannot execute operation on "default" connection because connection is not yet established.
Can someone tell me what it means?
Here is my configuration as well:
ormconfig.ts
import { DataSource } from "typeorm";
import { join } from "path";

let connectionSource;

if (!process.env.PG_HOST) {
  connectionSource = new DataSource({
    type: "sqlite",
    database: "sqlite_db",
    entities: [join(__dirname, "./lib/db/entities/*.entity.{js,ts}")],
  });
} else {
  connectionSource = new DataSource({
    type: "postgres",
    host: process.env.PG_HOST,
    port: +process.env.PG_PORT,
    username: process.env.PG_USER,
    database: process.env.PG_DB_NAME,
    password: process.env.PG_PASS,
    entities: [join(__dirname, "./lib/db/entities/*.entity.{js,ts}")],
    migrations: [join(__dirname, "./lib/db/migrations/*.{js,ts}")],
    migrationsTransactionMode: "each",
    migrationsRun: true,
    synchronize: true,
    logging: false
  });
}

export default connectionSource as DataSource;
script:
"typeorm": "ts-node ./node_modules/typeorm/cli.js -d ormconfig.ts",
I read that the problem can be that the connection is not initialized before running migrations, but how do I do that when using NestJS?
Thanks for any help!
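Not an authoritative answer, but with the TypeORM 0.3 DataSource API the migrations can also be run programmatically after an explicit initialize(); the error above is what you get when something executes against the DataSource before it has been initialized. A minimal sketch under that assumption (run-migrations.ts is a hypothetical helper, not part of the original project):

// run-migrations.ts -- hypothetical bootstrap script
import connectionSource from "./ormconfig";

async function runMigrations() {
  // initialize() actually opens the connection; without it, operations fail
  // with CannotExecuteNotConnectedError
  await connectionSource.initialize();
  await connectionSource.runMigrations();
  await connectionSource.destroy();
}

runMigrations().catch((err) => {
  console.error(err);
  process.exit(1);
});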

Apollo Federation cannot get connection running in docker-compose: Couldn't load service definitions for...reason: connect ECONNREFUSED 127.0.0.1:80

I am trying to run two applications with docker-compose, without success: the federation gateway cannot connect to the service. Both work fine outside of Docker.
The projects are:
Apollo Server Federation in NodeJS
GraphQL API In Kotlin + Spring Boot + expediagroup:graphql-kotlin-spring-server
docker-compose.yml
version: '3'

services:
  myservice:
    image: 'myservice:0.0.1-SNAPSHOT'
    container_name: myservice_container
    ports:
      - 8080:8080
    expose:
      - 8080

  apollo_federation:
    image: 'apollo-federation'
    build: '.'
    container_name: apollo_federation_container
    restart: always
    ports:
      - 4000:4000
    expose:
      - 4000
    environment:
      ENDPOINT: "http://myservice/graphql"
    depends_on:
      - myservice
I have already tried a lot of combinations for the endpoint, e.g. http://myservice:8080/graphql, http://localhost:8080/graphql, http://myservice, etc.
index.js from Apollo Project
const { ApolloServer } = require("apollo-server");
const { ApolloGateway } = require("@apollo/gateway");

const gateway = new ApolloGateway({
  serviceList: [
    { name: "Service1", url: process.env.ENDPOINT || 'http://localhost:8080/graphql' },
  ]
});

const server = new ApolloServer({
  gateway,
  subscriptions: false
});

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
})
Error Log
Error checking for changes to service definitions: Couldn't load service definitions for "Service1" at http://myservice/graphql: request to http://myservice/graphql failed, reason: connect ECONNREFUSED 172.18.0.3:80
If I try to test from the browser, I get a 500 error from GraphiQL.
I already tried to use nginx as a reverse proxy, but with no success.
I am using the latest libraries in both projects.
Thanks
In your code, you are using
{ name: "Service1", url: process.env.ENDPOINT || 'http://localhost:8080/graphql' },
which pulls process.env.ENDPOINT, and that variable is defined in your docker-compose file without a port, so it defaults to port 80:
environment:
  ENDPOINT: "http://myservice/graphql" # This is port 80

Redis Enterprise Clustering Command Error 'CLUSTER'

We just installed Redis Enterprise and set some configuration on the database.
We created a simple script because in our app the cluster command doesn't work, and indeed it doesn't work:
var RedisClustr = require('redis-clustr');

var redis = new RedisClustr({
  servers: [
    {
      host: 'URL',
      port: 18611
    }
  ],
  redisOptions: {
    password: 'ourpassword'
  }
});

redis.get('KSHJDK', function(err, res) {
  console.log(res, err);
});
Error on the shell:
undefined Error: couldn't get slot allocation'
at tryClient (/Users/machine/Sites/redis-testing/node_modules/redis-clustr/src/RedisClustr.js:194:17)
at /Users/machine/Sites/redis-testing/node_modules/redis-clustr/src/RedisClustr.js:205:16
at Object.callbackOrEmit [as callback_or_emit] (/Users/machine/Sites/redis-testing/node_modules/redis-clustr/node_modules/redis/lib/utils.js:89:9)
at RedisClient.return_error (/Users/machine/Sites/redis-testing/node_modules/redis-clustr/node_modules/redis/index.js:706:11)
at JavascriptRedisParser.returnError (/Users/machine/Sites/redis-testing/node_modules/redis-clustr/node_modules/redis/index.js:196:18)
at JavascriptRedisParser.execute (/Users/machine/Sites/redis-testing/node_modules/redis-clustr/node_modules/redis-parser/lib/parser.js:572:12)
at Socket.<anonymous> (/Users/machine/Sites/redis-testing/node_modules/redis-clustr/node_modules/redis/index.js:274:27)
at Socket.emit (events.js:321:20)
at addChunk (_stream_readable.js:297:12)
at readableAddChunk (_stream_readable.js:273:9) {
errors: [
ReplyError: ERR command is not allowed
at parseError (/Users/machine/Sites/redis-testing/node_modules/redis-clustr/node_modules/redis-parser/lib/parser.js:193:12)
at parseType (/Users/machine/Sites/redis-testing/node_modules/redis-clustr/node_modules/redis-parser/lib/parser.js:303:14) {
command: 'CLUSTER',
args: [Array],
code: 'ERR'
}
]
}
Are we missing something in the configuration?
We don't know if it's an error in the clustering setup or in Redis Enterprise.
Redis Enterprise supports two clustering flavors.
With the regular OSS cluster API you need a cluster-aware client like the one you are using.
The database you have configured, however, is the flavor meant for non-cluster-aware clients, so you should use a regular client with it (as if you were connecting to a single Redis process).
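For example, with the plain node_redis client (a sketch; host, port and password are the placeholders from the question, and the callback-style API shown is node_redis v3):

var redis = require('redis');

// connect straight to the Redis Enterprise endpoint, as if it were a
// single Redis process -- no CLUSTER commands are issued this way
var client = redis.createClient({
  host: 'URL',
  port: 18611,
  password: 'ourpassword'
});

client.get('KSHJDK', function(err, res) {
  console.log(res, err);
});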

How can I access localStorage of Cordova App via WebdriverIO and Appium?

I'm currently trying to write some automated tests for our cordova app written in Angular.
My current setup is the following:
Versions:
appium: 1.7.2
wdio-appium-service: 0.2.3
webdriverio: 4.11.0
wdio.conf.js
exports.config = {
  port: 4723,
  logLevel: 'error',
  capabilities: [{
    platformName: 'Android',
    platformVersion: '8.1',
    deviceName: 'any',
    app: '../cordova_app/platforms/android/app/build/outputs/apk/debug/app-debug.apk',
    autoWebview: true,
    autoGrantPermissions: true
  }],
  // specs: ['./tests/spec/**/*.js'],
  specs: ['./tests/spec/login.js'],
  services: ['appium'],
  reporters: ['spec'],
  framework: 'jasmine',
  jasmineNodeOpts: {
    defaultTimeoutInterval: 90000
  }
}
tests/spec/login.js
describe('Language and market choosing process', () => {
  beforeEach(() => {
    browser.timeouts('implicit', 2000);
  });

  afterEach(() => {
    browser.reload();
  });

  it('should go through login process', () => {
    const selectCountryBtn = $('.fsr-login__market-chooser');
    selectCountryBtn.click();
    // everything works so far

    browser.localStorage('POST', {key: 'test', value: 'test123'});
    // Failed: unknown error: call function result missing 'value'
  });
});
When I run this test on my Android 8.1 emulator, the test crashes with the following error as soon as it reaches the localStorage part:
Failed: unknown error: call function result missing "value"
Error: An unknown server-side error occurred while processing the command.
at localStorage("POST", [object Object]) - index.js:316:3
The localStorage API of WebdriverIO is described here
What am I doing wrong?
I agree that localStorage manipulation is a tricky endeavour to tackle, especially cross-browser, cross-platform, etc. When dealing with application cookies, or local storage, I default to using plain JS commands to achieve my goal.
As such, I would recommend you try the browser.execute() command to manipulate the browser's local storage:
browser.execute("localStorage.setItem('socialMediaRuinsTheWorld', true)");
or
browser.execute((keyName, keyValue) => {
localStorage.setItem(keyName, keyValue);
}, "testing", "theLocalStorage");
Outcome:
The Appium API doesn't offer a localStorage function.
I think this is your problem. Also, if you use version 3.4, check the Appium section of the docs, not only the Protocol section. Native apps don't have the same localStorage as a browser, and you can't access it easily.
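As a follow-up to the execute() approach above: reading a value back works the same way, since execute() returns whatever the injected function returns. A sketch, assuming the synchronous WebdriverIO v4 API used in the question (where protocol commands wrap the result in an object with a value property):

// write, then read back, via plain JS running inside the webview
browser.execute((keyName, keyValue) => {
  localStorage.setItem(keyName, keyValue);
}, 'test', 'test123');

const stored = browser.execute((keyName) => {
  return localStorage.getItem(keyName);
}, 'test');

console.log(stored.value); // the stored string, e.g. 'test123'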

Browser reloaded before server with grunt-express-server and grunt-contrib-watch

I am trying to use grunt-contrib-watch together with grunt-express-server to reload my express server and the browser page whenever I made changes to the javascript files. The problem I am having is that the page reloads before the server is ready, so I get a "can't establish a connection to the server at localhost:3000."
Here is my Gruntfile.js:
module.exports = function(grunt) {
  'use strict';

  grunt.initConfig({
    express: {
      dev: {
        options: {
          script: 'gui-resources/scripts/js/server.js'
        }
      }
    },
    watch: {
      express: {
        files: ['gui-resources/scripts/js/**/*.js'],
        tasks: ['express:dev'],
        options: {
          livereload: true,
          spawn: false
        }
      }
    }
  });

  // Load all grunt tasks declared in package.json
  require('load-grunt-tasks')(grunt);

  grunt.registerTask('default', ['express:dev', 'watch']);
};
In my server.js file I start the server with:
var port = 3000;
app.listen(port, function() {
  console.log('Listening on port %d', port);
});
I found this similar question, but the solution proposed there doesn't apply to my case, since I am already logging some output when the server starts, yet the race condition appears anyway.
Update:
If I remove 'spawn: false' from the watch:express config, everything works, but express logs an error when it starts:
Error: listen EADDRINUSE
at errnoException (net.js:878:11)
at Server._listen2 (net.js:1016:14)
at listen (net.js:1038:10)
at Server.listen (net.js:1104:5)
at Function.app.listen (/Users/pat/projects/sourcefabric/plugin-liveblog-embed-server/node_modules/express/lib/application.js:533:24)
at /Users/pat/projects/sourcefabric/plugin-liveblog-embed-server/gui-resources/scripts/js/server.js:86:13
at Object.context.execCb (/Users/pat/projects/sourcefabric/plugin-liveblog-embed-server/node_modules/requirejs/bin/r.js:1890:33)
at Object.Module.check (/Users/pat/projects/sourcefabric/plugin-liveblog-embed-server/node_modules/requirejs/bin/r.js:1106:51)
at Object.<anonymous> (/Users/pat/projects/sourcefabric/plugin-liveblog-embed-server/node_modules/requirejs/bin/r.js:1353:34)
at /Users/pat/projects/sourcefabric/plugin-liveblog-embed-server/node_modules/requirejs/bin/r.js:372:23
Strangely enough, in spite of the error, the server and the page reload correctly.
Here is my code (the real Gruntfile is bigger, but I removed the parts not related to watch or express to make the question more readable).
I think you should be able to use the debounceDelay option with livereload to wait a bit longer until your server is ready:
watch: {
  express: {
    files: ['gui-resources/scripts/js/**/*.js'],
    tasks: ['express:dev'],
    options: {
      livereload: true,
      spawn: false,
      debounceDelay: 1000 // in milliseconds
    }
  }
}
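Another angle worth trying (the option names below are taken from the grunt-express-server README as I remember it, so treat them as an assumption to verify): grunt-express-server can be told to wait until the server's own log output signals readiness before the task finishes, which would make the livereload fire only once 'Listening on port ...' has been printed:

express: {
  dev: {
    options: {
      script: 'gui-resources/scripts/js/server.js',
      // finish the task only after the server prints its readiness line
      output: 'Listening on port',
      // fallback wait in milliseconds if the output never matches
      delay: 3000
    }
  }
}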