Where is the correct place to define the port of an Express server?

I keep all hard-coded information inside models/config.js, but I'm not sure that models/config.js is the correct place for a port.

Keep a ./config/my_database_config.js and put all database settings there. Do the same with a ./config/main_server_config.js for server settings such as the port; other config files can usually go in that directory as well.
You can hardcode values in my_database_config.js, or the file could request the configuration from a server that returns JSON like the following:
configJson = {
  "env_production": {
    "db_host_production": "www.host.production.url",
    "db_password_production": "www.host.production.password"
  },
  "env_staging": {
    "db_host_staging": "www.host.staging.url",
    "db_password_staging": "www.host.staging.password"
  },
  "env_local": {
    "db_host_local": "www.host.local.url",
    "db_password_local": "www.host.local.password"
  }
}
If it is just for local testing purposes, you could even pass config values in as environment variables and read them in config.js.
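A minimal sketch of such a config module, assuming the PORT and DB_HOST variable names and their fallback values (both are illustrative, not prescribed by the question):

```javascript
// config.js - hypothetical sketch: read settings from environment
// variables, falling back to hard-coded defaults for local development
const config = {
  port: parseInt(process.env.PORT, 10) || 3000,
  dbHost: process.env.DB_HOST || 'localhost',
};

module.exports = config;
```

The server entry point would then call something like `app.listen(config.port)`, keeping the port out of the application code itself.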


Laravel site can't be reached with DotenvEditor

I am using DotenvEditor to save the env parameters, but after redirecting I get this error:
This site can’t be reached. The connection was reset.
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_RESET
What is the mistake in my code? The rest of the controller works properly.
if (isset($request->APP_DEBUG)) {
    $env_update = DotenvEditor::setKeys(['APP_DEBUG' => 'true']);
} else {
    $env_update = DotenvEditor::setKeys(['APP_DEBUG' => 'false']);
}
if (isset($request->COOKIE_CONSENT_ENABLED)) {
    $env_update = DotenvEditor::setKeys(['COOKIE_CONSENT_ENABLED' => 'true']);
} else {
    $env_update = DotenvEditor::setKeys(['COOKIE_CONSENT_ENABLED' => 'false']);
}
$env_update = DotenvEditor::setKeys([
    'APP_NAME' => preg_replace('/\s+/', '', $request->title),
    'APP_URL' => preg_replace('/\s+/', '', $request->APP_URL),
]);
$env_update->save();
Try updating your .env file with Notepad++ run as administrator; I think it is easier and more user-friendly. When you have made the necessary changes, save the file. Afterwards, I think you must reboot the virtual machine (if you are using one) or restart the service so the change takes effect in the application.
Regarding Laravel-Dotenv-Editor, please visit the Dotenv Editor documentation for more information.
Example of a .env file:
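(The example was missing from the original post; the following is a minimal Laravel-style sketch with placeholder values, using the keys the controller above writes.)

```
APP_NAME=MyApp
APP_ENV=local
APP_DEBUG=true
APP_URL=http://localhost
COOKIE_CONSENT_ENABLED=false
DB_HOST=127.0.0.1
DB_DATABASE=homestead
DB_USERNAME=root
DB_PASSWORD=secret
```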

Why is my MySQL connection not working with dotenv variables?

Good evening,
I have been trying to connect to my database using dotenv variables.
It worked perfectly before I used them.
I have two files: mysqlConfig, where I put my database settings, and .env, where I put my variables.
I don't see what I am doing wrong here.
.env:
SOCKET_PATH='/Applications/MAMP/tmp/mysql/mysql.sock'
USER='root'
PASSWORD='root'
HOST='localhost'
DATABASE='Groupomania'
MysqlConfig :
require('dotenv').config();
var mysql = require('mysql');

// Connection to MySQL
var bdd = mysql.createConnection({
    socketPath: process.env.SOCKET_PATH,
    user: process.env.USER,
    password: process.env.PASSWORD,
    host: process.env.HOST,
    database: process.env.DATABASE
});

module.exports = bdd;
Thank you for your help
Have a nice evening
It might be because the dotenv package hasn't been able to find your .env file.
Try printing one of the variables and see if it's undefined. If it is, my assumption is correct.
console.log(process.env['SOCKET_PATH'])
In that case, you need to manually specify the relative path to your .env file. That can be done by passing an options object to your config function, containing the property path:
const path = require('path')
require('dotenv').config({
    path: path.resolve(__dirname, '../../.env')
})
OK, I just found the issue...
I used console.log for every variable until I figured out that my USER was not the correct one... it was my last name (I don't know why).
I did not know that the variable USER is kind of reserved.
Anyway, this way it works:
SOCKET_PATH='/Applications/MAMP/tmp/mysql/mysql.sock'
USERDB='root'
PASSWORD='root'
HOST='127.0.0.1'
DATABASE='Groupomania'
mysqlConfig
require('dotenv').config();
var mysql = require('mysql');

// Connection to MySQL
var bdd = mysql.createConnection({
    socketPath: process.env.SOCKET_PATH,
    user: process.env.USERDB,
    password: process.env.PASSWORD,
    host: process.env.HOST,
    database: process.env.DATABASE
});

module.exports = bdd;
Your answer helped a lot, thanks !!

Chalice on_s3_event trigger seems not to work

I have a Chalice app that reads config data from a file in an S3 bucket. The file can change from time to time, and I want the app to immediately use the updated values, so I am using the on_s3_event decorator to reload the config file.
My code looks something like this (stripped way down for clarity):
CONFIG = {}
app = Chalice(app_name='foo')

@app.on_s3_event(bucket=S3_BUCKET, events=['s3:ObjectCreated:*'],
                 prefix='foo/')
def event_handler(event):
    _load_config()

def _load_config():
    # fetch the json file from the S3 bucket, then:
    CONFIG['foo'] = ...  # some item from the json file
    CONFIG['bar'] = ...  # some other item from the json file

_load_config()

@app.route('/')
def home():
    # refer to CONFIG values here
    ...
My problem is that for a short while (maybe 5-10 minutes) after uploading a new version of the config file, the app still uses the old config values.
Am I doing this wrong? Should I not be depending on global state in a Lambda function at all?
So your design here is flawed.
When you create an S3 event in Chalice, it creates a separate Lambda function for that event. The CONFIG variable gets updated in the running instance of that event-handler Lambda and in any newly started instances, but any other Lambda functions in your Chalice app that are already running just continue with their current settings until they are cleaned up and restarted.
If you cannot live with a config that only changes when you deploy your Lambda functions, you could use Redis or some other in-memory cache/database.
You should use the .chalice/config.json file to store variables for your Chalice application. They are exposed to the function as environment variables and can be read with os.environ:
URL = os.environ['MYVAR']
Your config.json file might look like this:
{
  "version": "2.0",
  "app_name": "MyApp",
  "manage_iam_role": false,
  "iam_role_arn": "arn:aws:iam::************:role/Chalice",
  "lambda_timeout": 300,
  "stages": {
    "development": {
      "environment_variables": {
        "MYVAR": "foo"
      }
    },
    "production": {
      "environment_variables": {
        "MYVAR": "bar"
      }
    }
  },
  "lambda_memory_size": 2048
}

deepstream error listen EADDRINUSE 127.0.0.1:6020

I tried to run my first deepstream.io server from this link, but I get this error:
CONNECTION_ERROR | Error: listen EADDRINUSE 127.0.0.1:3003
PLUGIN_ERROR | connectionEndpoint wasn't initialised in time
f:\try\deep\node_modules\deepstream.io\src\utils\dependency-initialiser.js:96
throw error
^
Error: connectionEndpoint wasn't initialised in time
    at DependencyInitialiser._onTimeout (f:\try\deep\node_modules\deepstream.io\src\utils\dependency-initialiser.js:94:17)
    at ontimeout (timers.js:386:14)
    at tryOnTimeout (timers.js:250:5)
    at Timer.listOnTimeout (timers.js:214:5)
and this is my code:
const DeepStreamServer = require("deepstream.io")
const C = DeepStreamServer.constants;
const server = new DeepStreamServer({
  host: 'localhost',
  port: 3003
})
server.start();
In deepstream 3.0 we released our HTTP endpoint, which by default runs alongside our WebSocket endpoint.
Because of this, passing the port option at the root level of the config no longer works: it overrides both the HTTP and WebSocket port options, so (as the error output shows) both endpoints try to start on the same port.
You can override each of these ports as follows:
const deepstream = require('deepstream.io')
const server = new deepstream({
  connectionEndpoints: {
    http: {
      options: {
        port: ...
      }
    },
    websocket: {
      options: {
        port: ...
      }
    }
  }
})
server.start()
Or you can define your config in a file and point to that while initialising deepstream[1].
[1] deepstream server configuration
One solution that I found is passing an empty config object, so instead of:
const server = new DeepStreamServer({
  host: 'localhost',
  port: 3003
})
I'm just using this:
const server = new DeepStreamServer({})
and now everything works well.
All of the below applies to version 4.2.2 (the latest version as of writing).
I was having the same port-in-use or config-file-not-found errors. I was using TypeScript and at first didn't pay attention to the output directory of the build (which can be a problem when you use TypeScript and build). In the end, after a lot of analysis, I was able to run the server.
I checked the source code and saw how the config is loaded:
const SUPPORTED_EXTENSIONS = ['.yml', '.yaml', '.json', '.js']
const DEFAULT_CONFIG_DIRS = [
path.join('.', 'conf', 'config'), path.join('..', 'conf', 'config'),
'/etc/deepstream/config', '/usr/local/etc/deepstream/config',
'/usr/local/etc/deepstream/conf/config',
]
DEFAULT_CONFIG_DIRS.push(path.join(process.argv[1], '..', 'conf', 'config'))
DEFAULT_CONFIG_DIRS.push(path.join(process.argv[1], '..', '..', 'conf', 'config'))
I also tested different things. Here is what I came up with:
First of all, if we don't pass any parameter to the constructor, a config from one of the default directories is loaded. If there isn't one, the server fails to start. One of the places where we can put a config is ./conf in the same folder as the server node script.
Secondly, we can pass a config path as a string (the constructor parameter): config.yml or one of the supported extensions. The server then loads the server config plus the permission.yml and users.yml configs, which are expected to be in the same folder. If they are not, their load fails, the permission plugin does not load, and the same goes for the users config; there is no fallback to defaults.
Thirdly, the supported extensions for the config files are: yml, yaml, json, js.
In a Node.js context, if nothing is specified, there is no fallback to some default config: the config must be provided in one of the default folders, by passing a path to it, or by passing a config object. All optional options default to sensible values if not provided (the example a bit below shows this). Note, however, that specifying a connection endpoint is required.
To pass a path, point it at the config.yml file (the server config), e.g. path.join(__dirname, './conf/config.yml'). permission.yml and users.yml are then loaded from the same directory (the extension can be any of the supported ones). We cannot pass a path to a directory; that will fail.
We can specify the path to the permission config or user config separately within config.yml, as shown below:
# Permissioning example with default values for config-based permissioning
permission:
  type: config
  options:
    path: ./permissions.yml
    maxRuleIterations: 3
    cacheEvacuationInterval: 60000
Finally, we can pass an object to configure the server, or pass null and use the .set methods (I didn't test the second method). When configuring with an object, we follow the same structure as the yml file, sometimes with slightly different naming. The TypeScript declaration files show the way; with an editor like VS Code we get auto-completion and type definitions even without using TypeScript.
The simplest equivalent to the previous version's config is:
const webSocketServer = new Deepstream({
  connectionEndpoints: [
    {
      type: 'ws-websocket',
      options: {
        port: 6020,
        host: '127.0.0.1',
        urlPath: '/deepstream'
      }
    }
  ]
});
webSocketServer.start();
The above is the new syntax and the new way. The old form:
const server = new DeepStreamServer({
  host: 'localhost',
  port: 3003
})
is completely deprecated and not supported in version 4 (the docs are not updated).

syslog-ng issue with tagging messages to the server

I installed syslog-ng using "yum install syslog-ng" on both the local machine and the server.
I am using the open source version of syslog-ng.
I need to pass the log file name from the client to the server. I explicitly set the .SDATA.file@18372.4.name field on the client side, since the name of the file is available in the $FILE_NAME macro, but .SDATA.file@18372.4.name is empty on the server side. When I use a static file name, the logging works.
Below is my configuration; I don't know where I am going wrong. If you need more information, I can provide it. Can anyone help me?
My client-side syslog-ng configuration:
source s_application_logs {
    file(
        "/var/log/test.log"
        flags(no-parse)
    );
};
destination d_access_system {
    syslog(
        "52.38.34.160"
        transport("tcp")
        port(6514)
    );
};
rewrite r_set_filename {
    set(
        "$FILE_NAME",
        value(".SDATA.file@18372.4.name")
    );
};
rewrite r_rename_filename {
    subst(
        "/var/log/",
        "",
        value(".SDATA.file@18372.4.name"),
        type("string"),
        flags("prefix")
    );
};
log {
    source(s_application_logs);
    rewrite(r_set_filename);
    rewrite(r_rename_filename);
    destination(d_access_system);
};
My server-side syslog-ng configuration:
source s_server_end {
    syslog(
        port(6514)
        max_connections(1000)
        keep_hostname(yes)
    );
};
destination d_log_files {
    file(
        "/var/log/test/${.SDATA.file@18372.4.name}"
        create_dirs(yes)
    );
};
log {
    source(s_server_end);
    destination(d_log_files);
};
The problem is that the $FILE_NAME macro is currently only available in the commercial version of syslog-ng. For a possible workaround, see this blog post: Forwarding filenames with syslog-ng.