How to configure the NestJS/Bull Redis connection

I am using BullModule in NestJS.
When I connect to a local Redis it works:
const REDIS = {
  host: 'localhost',
};

@Module({
  imports: [
    TaskTypesModule,
    TasksModule,
    ScheduleModule.forRoot(),
    BullModule.forRoot({
      // @ts-ignore
      redis: REDIS,
    }),
  ],
  controllers: [AppController],
  providers: [AppService, PrismaService],
})
export class AppModule {}
But when I connect to a remote system
const REDIS = {
  host: process.env.REDIS_ENDPOINT,
  port: process.env.REDIS_PORT,
  password: process.env.REDIS_PASSWORD,
};
with this env file:
REDIS_USERNAME=default
REDIS_PASSWORD=p----------------------S
REDIS_ENDPOINT=redis-1xxxxx4.c261.us-east-1-4.ec2.cloud.redislabs.com
REDIS_PORT=1xxxxx4
it doesn't write to the Redis queue. By way of comparison, I can connect via RedisInsight:
(screenshot of the RedisInsight connection)
So, bottom line: how do I configure the Redis node for a remote connection in Bull?

From what I have tested, Bull's settings never accept a Redis port different from its default, 6379.

Maybe the env file hasn't been loaded yet. You can try adding import 'dotenv/config' at the top of this file.
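A minimal sketch of that suggestion, assuming the env vars above: load dotenv before the config object is built, and coerce the port to a number, since process.env values are always strings and Bull's underlying ioredis client expects a numeric port.

import 'dotenv/config'; // load the .env file before anything reads process.env

const REDIS = {
  host: process.env.REDIS_ENDPOINT,
  port: parseInt(process.env.REDIS_PORT ?? '6379', 10), // env values are strings
  password: process.env.REDIS_PASSWORD,
};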

Related

I used Google Cloud Platform App Engine and Google Cloud SQL with Sequelize, but I get Error: connect ETIMEDOUT

The server is deployed via GCP; React is deployed as a separate instance.
I use Sequelize, Node, Express, and MySQL.
I confirmed through Workbench that Sequelize and the Cloud SQL DB were linked correctly.
I had deployed through App Engine once before, so I added the Cloud SQL-related settings and assumed that, if there was no problem, I would be able to fetch the desired data.
The deployment succeeded via gcloud app deploy, but when I tried to verify that the desired API was being called correctly, 500 errors kept occurring.
I don't know where to fix it; can you give me a tip?
config/config.js
require('dotenv').config();

module.exports = {
  development: {
    username: "root",
    password: process.env.SEQUELIZE_PASSWORD,
    database: "DBName",
    host: "SQL Instance Public IP Address",
    dialect: "mysql",
    timezone: "+09:00"
  },
  test: {
    username: "root",
    password: null,
    database: "database_test",
    host: "127.0.0.1",
    dialect: "mysql"
  },
  production: {
    username: "root",
    password: process.env.SEQUELIZE_PASSWORD,
    database: "DBName",
    host: "SQL Instance Public IP Address",
    dialect: "mysql",
    timezone: "+09:00",
    logging: false
  },
}
server.js
const express = require('express')
const cors = require('cors')
const dotenv = require('dotenv')
const Test = require('./Routes/Test')
const { sequelize } = require('./models')

dotenv.config()
const app = express()
app.set("PORT", process.env.PORT || 8080)

sequelize.sync({ force: false })
  .then(() => { console.log('db Online') })
  .catch((err) => { console.error(err) })

app.use(cors({ origin: "*" }))
app.use(express.json())
app.use('/test', Test)

app.listen(app.get('PORT'), () => {
  console.log(app.get("PORT"), 'Online')
})
In server.js, /test is meant to receive data from the front end via a GET request.
app.yaml
runtime: nodejs
env: flex
service: default

manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10

env_variables:
  SQL_USER: "root"
  SQL_PASSWORD: "root password entered when SQL was created"
  SQL_DATABASE: "Instance ID"
  INSTANCE_CONNECTION_NAME: "logintest-b314a:asia-northeast3:Instance ID"

beta_settings:
  cloud_sql_instances: "logintest-b314a:asia-northeast3:Instance ID"
Error Messages from Error Reporting
SequelizeConnectionError: connect ETIMEDOUT
    at ConnectionManager.connect (/app/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:102:17)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
  parent: Error: connect ETIMEDOUT
    at Connection._handleTimeoutError (/app/node_modules/mysql2/lib/connection.js:189:17)
    at listOnTimeout (internal/timers.js:554:17)
    at processTimers (internal/timers.js:497:7) {
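(For reference: on App Engine, Cloud SQL is usually reached through the unix socket mounted at /cloudsql/<INSTANCE_CONNECTION_NAME> rather than the instance's public IP, which would have to be added as an authorized network. A sketch of what the production block could look like under that assumption, reusing the env_variables already defined in app.yaml:)

production: {
  username: process.env.SQL_USER,
  password: process.env.SQL_PASSWORD,
  database: process.env.SQL_DATABASE,
  dialect: "mysql",
  dialectOptions: {
    // assumption: connect over the Cloud SQL unix socket instead of TCP
    socketPath: "/cloudsql/" + process.env.INSTANCE_CONNECTION_NAME
  },
  timezone: "+09:00",
  logging: false
}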

Apollo Federation cannot get a connection to a service running in docker-compose: Couldn't load service definitions for ... reason: connect ECONNREFUSED 127.0.0.1:80

I am trying to run two applications with docker-compose, without success: the federation cannot connect to the service. Both work well outside Docker.
The projects are:
Apollo Server Federation in NodeJS
GraphQL API in Kotlin + Spring Boot + expediagroup:graphql-kotlin-spring-server
docker-compose.yml
version: '3'
services:
  myservice:
    image: 'myservice:0.0.1-SNAPSHOT'
    container_name: myservice_container
    ports:
      - 8080:8080
    expose:
      - 8080
  apollo_federation:
    image: 'apollo-federation'
    build: '.'
    container_name: apollo_federation_container
    restart: always
    ports:
      - 4000:4000
    expose:
      - 4000
    environment:
      ENDPOINT: "http://myservice/graphql"
    depends_on:
      - myservice
I have already tried a lot of combinations for my endpoint, e.g. http://myservice:8080/graphql, http://localhost:8080/graphql, http://myservice, etc.
index.js from Apollo Project
const { ApolloServer } = require("apollo-server");
const { ApolloGateway } = require("@apollo/gateway");

const gateway = new ApolloGateway({
  serviceList: [
    { name: "Service1", url: process.env.ENDPOINT || 'http://localhost:8080/graphql' },
  ]
});

const server = new ApolloServer({
  gateway,
  subscriptions: false
});

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
})
Error Log
Error checking for changes to service definitions: Couldn't load service definitions for "Service1" at http://myservice/graphql: request to http://myservice/graphql failed, reason: connect ECONNREFUSED 172.18.0.3:80
If I try to test from the browser, I get a 500 error from graphiql.
I have already tried using nginx as a reverse proxy, but with no success.
I am using the latest libs in the projects.
Thanks
In your code, you are using
  { name: "Service1", url: process.env.ENDPOINT || 'http://localhost:8080/graphql' },
which pulls in process.env.ENDPOINT, defined in your docker-compose file with no port, so it defaults to port 80:
environment:
  ENDPOINT: "http://myservice/graphql" # This is port 80
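Inside the compose network the service name does resolve, but the container's listening port (8080) still has to be explicit, so presumably the endpoint should be:

environment:
  ENDPOINT: "http://myservice:8080/graphql" # service name + container port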

Redis Enterprise Clustering Command Error 'CLUSTER'

We just installed Redis Enterprise and set some configuration on the database.
We created a simple script because the CLUSTER command doesn't work in our app, and indeed it doesn't:
var RedisClustr = require('redis-clustr');

var redis = new RedisClustr({
  servers: [
    {
      host: 'URL',
      port: 18611
    }
  ],
  redisOptions: {
    password: 'ourpassword'
  }
});

redis.get('KSHJDK', function(err, res) {
  console.log(res, err);
});
Error on the shell:
undefined Error: couldn't get slot allocation
    at tryClient (/Users/machine/Sites/redis-testing/node_modules/redis-clustr/src/RedisClustr.js:194:17)
    at /Users/machine/Sites/redis-testing/node_modules/redis-clustr/src/RedisClustr.js:205:16
    at Object.callbackOrEmit [as callback_or_emit] (/Users/machine/Sites/redis-testing/node_modules/redis-clustr/node_modules/redis/lib/utils.js:89:9)
    at RedisClient.return_error (/Users/machine/Sites/redis-testing/node_modules/redis-clustr/node_modules/redis/index.js:706:11)
    at JavascriptRedisParser.returnError (/Users/machine/Sites/redis-testing/node_modules/redis-clustr/node_modules/redis/index.js:196:18)
    at JavascriptRedisParser.execute (/Users/machine/Sites/redis-testing/node_modules/redis-clustr/node_modules/redis-parser/lib/parser.js:572:12)
    at Socket.<anonymous> (/Users/machine/Sites/redis-testing/node_modules/redis-clustr/node_modules/redis/index.js:274:27)
    at Socket.emit (events.js:321:20)
    at addChunk (_stream_readable.js:297:12)
    at readableAddChunk (_stream_readable.js:273:9) {
  errors: [
    ReplyError: ERR command is not allowed
        at parseError (/Users/machine/Sites/redis-testing/node_modules/redis-clustr/node_modules/redis-parser/lib/parser.js:193:12)
        at parseType (/Users/machine/Sites/redis-testing/node_modules/redis-clustr/node_modules/redis-parser/lib/parser.js:303:14) {
      command: 'CLUSTER',
      args: [Array],
      code: 'ERR'
    }
  ]
}
Are we missing something in the configuration?
We don't know if it's an error in the clustering or in Redis Enterprise.
Redis Enterprise supports two clustering flavors.
With the regular OSS cluster you need a cluster-aware client like the one you are using.
The flavor you have configured, however, is for non-cluster-aware clients, so you should use a regular client (as if you were connecting to a single Redis process).
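As a hedged illustration of that last point: with the classic node_redis API (the same client redis-clustr wraps internally), a plain, non-cluster-aware connection might look like this; host, port, and password mirror the script above.

var redis = require('redis');

// Regular single-endpoint client: no CLUSTER commands are issued.
var client = redis.createClient({
  host: 'URL',          // same endpoint as in the script above
  port: 18611,
  password: 'ourpassword'
});

client.get('KSHJDK', function(err, res) {
  console.log(res, err);
});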

Browser reloaded before server with grunt-express-server and grunt-contrib-watch

I am trying to use grunt-contrib-watch together with grunt-express-server to reload my express server and the browser page whenever I make changes to the JavaScript files. The problem I am having is that the page reloads before the server is ready, so I get a "can't establish a connection to the server at localhost:3000" error.
Here is my Gruntfile.js:
module.exports = function(grunt) {
  'use strict';

  grunt.initConfig({
    express: {
      dev: {
        options: {
          script: 'gui-resources/scripts/js/server.js'
        }
      }
    },
    watch: {
      express: {
        files: ['gui-resources/scripts/js/**/*.js'],
        tasks: ['express:dev'],
        options: {
          livereload: true,
          spawn: false
        }
      }
    }
  });

  // Load all grunt tasks declared in package.json
  require('load-grunt-tasks')(grunt);

  grunt.registerTask('default', ['express:dev', 'watch'])
};
In my server.js file I start the server with:
var port = 3000;
app.listen(port, function() {
  console.log('Listening on port %d', port);
});
I found this similar question, but the solution proposed there doesn't apply in my case, since I am already logging some output when the server starts, and the race condition appears anyway.
Update:
If I remove 'spawn: false' from the watch:express config, everything works, but express logs an error when started:
Error: listen EADDRINUSE
    at errnoException (net.js:878:11)
    at Server._listen2 (net.js:1016:14)
    at listen (net.js:1038:10)
    at Server.listen (net.js:1104:5)
    at Function.app.listen (/Users/pat/projects/sourcefabric/plugin-liveblog-embed-server/node_modules/express/lib/application.js:533:24)
    at /Users/pat/projects/sourcefabric/plugin-liveblog-embed-server/gui-resources/scripts/js/server.js:86:13
    at Object.context.execCb (/Users/pat/projects/sourcefabric/plugin-liveblog-embed-server/node_modules/requirejs/bin/r.js:1890:33)
    at Object.Module.check (/Users/pat/projects/sourcefabric/plugin-liveblog-embed-server/node_modules/requirejs/bin/r.js:1106:51)
    at Object.<anonymous> (/Users/pat/projects/sourcefabric/plugin-liveblog-embed-server/node_modules/requirejs/bin/r.js:1353:34)
    at /Users/pat/projects/sourcefabric/plugin-liveblog-embed-server/node_modules/requirejs/bin/r.js:372:23
Strangely enough, in spite of the error, the server and the page reload correctly.
Here is my code (the real Gruntfile is bigger, but I removed the parts not related to watch or express to make the question more readable).
I think you should be able to use the debounceDelay option with livereload to wait a bit longer until your server is ready:
watch: {
  express: {
    files: ['gui-resources/scripts/js/**/*.js'],
    tasks: ['express:dev'],
    options: {
      livereload: true,
      spawn: false,
      debounceDelay: 1000 // in milliseconds
    }
  }
}

Grunt watch: only upload files that have changed

I was able to set up a Grunt task to SFTP files up to my dev server using grunt-ssh:
sftp: {
  dev: {
    files: {
      './': ['**', '!{node_modules,artifacts,sql,logs}/**'],
    },
    options: {
      path: '/path/to/project',
      privateKey: grunt.file.read(process.env.HOME + '/.ssh/id_rsa'),
      host: '111.111.111.111',
      port: 22,
      username: 'marksthebest',
    }
  }
},
But this uploads everything when I run it. There are thousands of files. I don't have time to wait for them to upload one-by-one every time I modify a file.
How can I set up a watch to upload only the files I've changed, as soon as I've changed them?
(For the curious, the server is a VM on the local network. It runs on a different OS and the setup is more similar to production than my local machine. Uploads should be lightning quick if I can get this working correctly)
What you need is grunt-newer, a task designed especially to update the configuration of any task depending on what file just changed, then run it. An example configuration could look like the following:
watch: {
  all: {
    files: ['**', '!{node_modules,artifacts,sql,logs}/**'],
    tasks: ['newer:sftp:dev']
  }
}
You can do that using the watch event of grunt-contrib-watch.
You basically need to handle the watch event, modify the sftp files config to only include the changed files, and then let grunt run the sftp task.
Something like this:
module.exports = function(grunt) {
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    secret: grunt.file.readJSON('secret.json'),
    watch: {
      test: {
        files: 'files/**/*',
        tasks: 'sftp',
        options: {
          spawn: false
        }
      }
    },
    sftp: {
      test: {
        files: {
          "./": "files/**/*"
        },
        options: {
          path: '/path/on/the/server/',
          srcBasePath: 'files/',
          host: 'hostname.com',
          username: '<%= secret.username %>',
          password: '<%= secret.password %>',
          showProgress: true
        }
      }
    }
  }); // end grunt.initConfig

  // on watch events configure sftp.test.files to only run on changed file
  grunt.event.on('watch', function(action, filepath) {
    grunt.config('sftp.test.files', {"./": filepath});
  });

  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.loadNpmTasks('grunt-ssh');
};
Note the "spawn: false" option, and the way you need to set the config inside the event handler.
Note 2: this code will upload one file at a time; there's a more robust method in the same link.
You can achieve that with Grunt:
grunt-contrib-watch
grunt-rsync
First things first: I am using a Docker container. I also added a public SSH key to my Docker container. So I am uploading into my "remote" container only the files that have changed in my local environment, with this Grunt task:
'use strict';

module.exports = function(grunt) {
  grunt.initConfig({
    rsync: {
      options: {
        args: ['-avz', '--verbose', '--delete'],
        exclude: ['.git*', 'cache', 'log'],
        recursive: true
      },
      development: {
        options: {
          src: './',
          dest: '/var/www/development',
          host: 'root@www.localhost.com',
          port: 2222
        }
      }
    },
    sshexec: {
      development: {
        command: 'chown -R www-data:www-data /var/www/development',
        options: {
          host: 'www.localhost.com',
          username: 'root',
          port: 2222,
          privateKey: grunt.file.read("/Users/YOUR_USER/.ssh/id_containers_rsa")
        }
      }
    },
    watch: {
      development: {
        files: [
          'node_modules',
          'package.json',
          'Gruntfile.js',
          '.gitignore',
          '.htaccess',
          'README.md',
          'config/*',
          'modules/*',
          'themes/*',
          '!cache/*',
          '!log/*'
        ],
        tasks: ['rsync:development', 'sshexec:development']
      }
    },
  });

  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.loadNpmTasks('grunt-rsync');
  grunt.loadNpmTasks('grunt-ssh');

  grunt.registerTask('default', ['watch:development']);
};
Good Luck and Happy Hacking!
I recently ran into a similar issue where I wanted to upload only the files that had changed. I'm only using grunt-exec. Provided you have SSH access to your server, you can do this task with much greater efficiency. I also created an rsync.json that is ignored by git, so collaborators can have their own rsync data.
The benefit is that if anyone makes a change, it automatically uploads to their stage.
// Watch - runs tasks when any changes are detected.
watch: {
  scripts: {
    files: '**/*',
    tasks: ['deploy'],
    options: {
      spawn: false
    }
  }
}
My deploy task is a registered task that compiles scripts then runs exec:deploy
// Showing exec:deploy task
// Using rsync with ssh keys instead of login/pass
exec: {
  deploy: {
    cmd: 'rsync public_html/* <%= rsync.options %> <%= rsync.user %>@<%= rsync.host %>:<%= rsync.path %>'
  }
}
See all the <%= rsync %> stuff? I use that to grab info from rsync.json, which is ignored by git. I only have it because this is a team workflow.
// rsync.json
{
  "options": "-rvp --progress -a --delete -e 'ssh -q'",
  "user": "mmcfarland",
  "host": "example.com",
  "path": "~/stage/public_html"
}
Make sure your rsync.json is defined in Grunt:
module.exports = function(grunt) {
  var rsync = grunt.file.readJSON('path/to/rsync.json');
  var pkg = grunt.file.readJSON('path/to/package.json');

  grunt.initConfig({
    pkg: pkg,
    rsync: rsync,
    // ...
  });
};
I don't think it's a good idea to upload everything that has changed to the staging server as soon as it changes, and working on the staging server is not a good idea either. You have to configure your local machine's server to be the same as staging/production.
It's better to upload once, when you do a deployment.
You can archive all the files using grunt-contrib-compress and push them with grunt-ssh as one file, then extract it on the server; that will be much faster.
Here's an example of the compress task:
compress: {
  main: {
    options: {
      archive: 'build/build.tar.gz',
      mode: 'tgz'
    },
    files: [
      {cwd: 'build/', src: ['sites/all/modules/**'], dest: './'},
      {cwd: 'build/', src: ['sites/all/themes/**'], dest: './'},
      {cwd: 'build/', src: ['sites/default/files/**'], dest: './'}
    ]
  }
}
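To round out the idea, pushing the archive and extracting it could look roughly like this; the host, username, and paths are placeholders, and grunt-ssh supplies both the sftp and sshexec tasks used below:

sftp: {
  deploy: {
    // push the single archive produced by the compress task
    files: { './': 'build/build.tar.gz' },
    options: {
      path: '/path/on/server/',
      host: 'example.com',
      username: 'deploy',
      privateKey: grunt.file.read(process.env.HOME + '/.ssh/id_rsa')
    }
  }
},
sshexec: {
  extract: {
    // unpack the pushed archive in place on the server
    command: 'tar -xzf /path/on/server/build.tar.gz -C /path/on/server/',
    options: {
      host: 'example.com',
      username: 'deploy',
      privateKey: grunt.file.read(process.env.HOME + '/.ssh/id_rsa')
    }
  }
}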
PS: I never looked at the rsync Grunt modules.
I understand this might not be what you are looking for, but I decided to post it as a standalone answer.