I've got a problem running migrations in TypeORM. Right now when I launch yarn typeorm migration:run, it throws this error:
CannotExecuteNotConnectedError: Cannot execute operation on "default" connection because connection is not yet established.
Can someone tell me what it means?
Here is also my configuration:
ormconfig.ts
import { DataSource } from "typeorm";
import { join } from "path";

let connectionSource;
if (!process.env.PG_HOST) {
  connectionSource = new DataSource({
    type: "sqlite",
    database: "sqlite_db",
    entities: [join(__dirname, "./lib/db/entities/*.entity.{js,ts}")],
  });
} else {
  connectionSource = new DataSource({
    type: "postgres",
    host: process.env.PG_HOST,
    port: +process.env.PG_PORT,
    username: process.env.PG_USER,
    database: process.env.PG_DB_NAME,
    password: process.env.PG_PASS,
    entities: [join(__dirname, "./lib/db/entities/*.entity.{js,ts}")],
    migrations: [join(__dirname, "./lib/db/migrations/*.{js,ts}")],
    migrationsTransactionMode: "each",
    migrationsRun: true,
    synchronize: true,
    logging: false
  });
}
export default connectionSource as DataSource;
script:
"typeorm": "ts-node ./node_modules/typeorm/cli.js -d ormconfig.ts",
I read that it can be a problem with the connection not being initialized before running migrations, but how do I do this when using NestJS?
Thanks for any help!
The server is deployed via GCP; React is deployed as a separate instance.
I use Sequelize, Node, Express, and MySQL.
I confirmed through Workbench that Sequelize and the Cloud SQL database were linked correctly.
I had deployed through App Engine before, so I added the Cloud SQL-related settings and assumed that, if nothing was wrong, I would be able to fetch the desired data.
The deployment succeeded via gcloud app deploy, but when I tried to verify that the desired API was being called correctly, 500 errors kept occurring.
I don't know where to fix it, can you give me a tip?
config/config.js
require('dotenv').config();
module.exports = {
  development: {
    username: "root",
    password: process.env.SEQUELIZE_PASSWORD,
    database: "DBName",
    host: "SQL Instance Public IP Address",
    dialect: "mysql",
    timezone: "+09:00"
  },
  test: {
    username: "root",
    password: null,
    database: "database_test",
    host: "127.0.0.1",
    dialect: "mysql"
  },
  production: {
    username: "root",
    password: process.env.SEQUELIZE_PASSWORD,
    database: "DBName",
    host: "SQL Instance Public IP Address",
    dialect: "mysql",
    timezone: "+09:00",
    logging: false
  },
}
server.js
const express = require('express')
const cors = require('cors')
const dotenv = require('dotenv')
const Test = require('./Routes/Test')
const { sequelize } = require('./models')
dotenv.config()
const app = express()
app.set("PORT", process.env.PORT || 8080)
sequelize.sync({ force: false })
  .then(() => { console.log('db Online') })
  .catch((err) => { console.error(err) })

app.use(cors({ origin: "*" }))
app.use(express.json())
app.use('/test', Test)

app.listen(app.get('PORT'), () => {
  console.log(app.get("PORT"), 'Online')
})
In server.js, /test is meant to serve this data to the frontend in response to a GET request.
app.yaml
runtime: nodejs
env: flex
service: default

manual_scaling:
  instances: 1

resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10

env_variables:
  SQL_USER: "root"
  SQL_PASSWORD: "root password entered when SQL was created"
  SQL_DATABASE: "Instance ID"
  INSTANCE_CONNECTION_NAME: "logintest-b314a:asia-northeast3:Instance ID"

beta_settings:
  cloud_sql_instances: "logintest-b314a:asia-northeast3:Instance ID"
Error Messages from Error Reporting
SequelizeConnectionError: connect ETIMEDOUT
at ConnectionManager.connect (/app/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:102:17)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
parent: Error: connect ETIMEDOUT
at Connection._handleTimeoutError (/app/node_modules/mysql2/lib/connection.js:189:17)
at listOnTimeout (internal/timers.js:554:17)
at processTimers (internal/timers.js:497:7) {
I have Typeorm loaded asynchronously
app.module.ts:
TypeOrmModule.forRootAsync({ inject: [ConfigService], useFactory: getTypeormConfig }),

typeorm.config.ts:

export const getTypeormConfig = async (config: ConfigService) => ({
  type: 'postgres' as const,
  host: config.get<string>('TYPEORM_HOST'),
  port: config.get<number>('TYPEORM_PORT'),
  password: config.get<string>('TYPEORM_PASSWORD'),
  ....
It seems that the TypeORM documentation is outdated, as noted here: https://wanago.io/2022/07/25/api-nestjs-database-migrations-typeorm/
I'm trying to follow this example. I created a separate configuration for CLI migrations, typeorm-migration.config.ts, at the root of the project:
import { BusinessLog } from 'src/yandex-ndd-api-client/entity/business-log.entity';
import { ClientStatus } from 'src/yandex-ndd-api-client/entity/client-status.entity';
import { Item } from 'src/yandex-ndd-api-client/entity/item.entity'; ...
export default new DataSource({
  type: 'postgres' as const,
  host: 'postgres',
  port: 5432,
  username: 'postgres',
  database: 'childrensworld',
  subscribers: [TruckSubscriber, RequestSubscriber],
  entities: [ Request, Item, BusinessLog, Place, ClientStatus, ....
I also wrote in package.json:
"typeorm": "ts-node ./node_modules/typeorm/cli", "typeorm:run-migrations": "npm run typeorm migration:run -- -d ./typeorm-migration.config.ts", "typeorm:generate-migration": "npm run typeorm -- -d ./typeorm-migration.config.ts migration:generate ./migrations/test_migration_name", "typeorm:create-migration": "npm run typeorm -- migration:create ./migrations/test_migration_name", "typeorm:revert-migration": "npm run typeorm -- -d ./typeorm-migration.config.ts migration:revert"
Launching npm run typeorm:generate-migration --name=CreatePost as in the example, I get:
Error during migration run: Error: Unable to open file: "E:\Programming\Nodejs\..........\typeorm-migration.config.ts". Cannot find module 'src/yandex-ndd-api-client/entity/business-log.entity' Require stack: - E:\Programming\Nodejs\LandPro\Ńhildsworld\Projects\tests\test_migrationTypeOrm\typeorm-migration.config.ts
It's as if it cannot resolve the entities imported in typeorm-migration.config.ts. The example says nothing about this. Does this CLI migration config (typeorm-migration.config.ts) need to be registered somewhere else?
This is our typeorm command. You might need the -r tsconfig-paths/register:
"typeorm": "ts-node --transpile-only -r tsconfig-paths/register ./node_modules/typeorm/cli.js --dataSource src/typeorm/typeorm.config.ts",
When trying to autogenerate migrations I get the following error.
File must contain a TypeScript / JavaScript code and export a DataSource instance
This is the command that I am running:
typeorm migration:generate projects/core/migrations/user -d db_config.ts -o
And my db_config.ts file looks like this:
import { DataSource } from "typeorm";

const AppDataSource = new DataSource({
  type: "postgres",
  host: process.env.PGHOST,
  port: 5432,
  username: process.env.PGUSER,
  password: process.env.PGPASSWORD,
  database: process.env.PGDATABASE,
  entities: ["./projects/**/entities/*.ts"],
  migrations: ["./projects/**/migrations/**.js"],
  synchronize: true,
  logging: false,
});

export default AppDataSource
My current file structure looks like this:
back_end
-- projects
--- index.ts
--- db_config.ts
And my index.ts file looks like this:
import express from "express";
import { AppDataSource } from "./data-source";
import budget_app from "./projects/budget_app/routes";
export const app = express();
const port = 3000;
AppDataSource.initialize()
  .then(() => {
    console.log("Data Source has been initialized!");
  })
  .catch((err) => {
    console.error("Error during Data Source initialization", err);
  });
// export default AppDataSource;
app.get("/", (req, res) => {
  res.send("Hello World!!!!");
});

app.use("/budget_app", budget_app);

app.listen(port, () => {
  console.log(`Example app listening on port ${port}`);
});
I am also running this in a docker container along with my postgres database. I have confirmed that the connection works because if I do synchronize=true it will create the table just fine. I just can't create the migration.
So I'm confused and don't know where to go from here to fix the issue. Thanks for your help in advance!
I had trouble with migrations in TypeORM, and finally found a solution that works consistently.
For me, building and then using a JS data source didn't work, so I'm sharing my solution for those who still struggle with TypeORM migrations.
Here is my step-by-step solution:
Create your data source config in a file like datasource.config.ts; mine looks like this:
import * as mysqlDriver from 'mysql2';
import {DataSourceOptions} from 'typeorm';
import dotenv from 'dotenv';
dotenv.config();
export function getConfig() {
  return {
    driver: mysqlDriver,
    type: 'mysql',
    host: process.env.MYSQL_HOST,
    port: parseInt(process.env.MYSQL_PORT, 10),
    username: process.env.MYSQL_USER,
    password: process.env.MYSQL_PASSWORD,
    database: process.env.MYSQL_DB,
    synchronize: false,
    migrations: [__dirname + '/../../typeorm-migrations/*.{ts,js}'],
    entities: [__dirname + '/../**/entity/*.{ts,js}'],
  } as DataSourceOptions;
}
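Since every option above comes from environment variables, a guard that fails fast when one is missing can save a confusing connection error later. A minimal sketch (the requireEnv helper is my own, not part of TypeORM):

```javascript
// Sketch: fail fast when a required environment variable is missing,
// instead of letting TypeORM fail later with a vague connection error.
// The variable names mirror the MYSQL_* keys used in datasource.config.ts.
function requireEnv(names, env = process.env) {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
  return names.map((name) => env[name]);
}

// Usage with a stubbed environment (values are placeholders):
const [host, port] = requireEnv(['MYSQL_HOST', 'MYSQL_PORT'], {
  MYSQL_HOST: 'localhost',
  MYSQL_PORT: '3306',
});
```

Calling requireEnv at the top of getConfig() would turn a silent undefined into an explicit error message.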
Create a file named something like migration.config.ts; the implementation looks like this:
import { DataSource } from 'typeorm';
import { getConfig } from './datasource.config'; // config defined in datasource.config.ts

const datasource = new DataSource(getConfig());
datasource.initialize();
export default datasource;
Now you can define your migration commands in your package.json file:
"migration:up": "./node_modules/.bin/ts-node ./node_modules/.bin/typeorm migration:run -d config/migration.config.ts",
"migration:down": "./node_modules/.bin/ts-node ./node_modules/.bin/typeorm migration:revert -d config/migration.config.ts"
By running yarn run migration:up you will be able to run the migrations defined in the typeorm-migrations folder.
I was running into the same issue (typeorm 0.3.4). I was using npx typeorm migration:show -d ./src/data-source.ts and getting the same error as above (File must contain a TypeScript / JavaScript code and export a DataSource instance). Generating the migration file itself somehow worked, but running/showing the migrations did not.
My datasource looks like this
export const AppDataSource = new DataSource({
  type: 'postgres',
  url: process.env.DATABASE_URL,
  logging: true,
  entities: ['dist/entities/*.js'],
  migrations: ['dist/migrations/*.js'],
});
because my tsc output lives in /dist. So based on the comments above I started using the data source file that tsc generated from the TypeScript, and the error message changed:
npx typeorm migration:run -d ./dist/appDataSource.js
CannotExecuteNotConnectedError: Cannot execute operation on "default" connection because connection is not yet established.
So I looked into the database logs and realized it wanted to connect to postgres with the standard unix user, it wasn't honoring the connection strings in the datasource code. I had to supply all environment variables to the command as well and it worked:
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/tsgraphqlserver npx typeorm migration:run -d ./dist/appDataSource.js
I had the same issue when using a .env file (if you don't have a .env file, this answer probably is irrelevant for you).
It seems that the CLI does not pick up environment variables from dotenv, so you have to load them yourself. E.g., using the dotenv library, put this at the top of your data-source file:
import * as dotenv from 'dotenv';
dotenv.config();
// export const AppDataSource = new DataSource()...
Alternatively, provide real environment variables when running the script:
PGHOST=... PGUSER=... PGDATABASE=... PGPASSWORD=... typeorm migration:generate ...
I am actually running into the same issue.
I was able to resolve it by using *.js instead of *.ts
Please try something like this:
tsc && typeorm migration:generate -d db_config.ts projects/core/migrations/user
My tsconfig.json looks like this.
{
  "compilerOptions": {
    "target": "esnext",
    "module": "CommonJS",
    "moduleResolution": "node",
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "outDir": "./build",
    "removeComments": false,
    "resolveJsonModule": true,
    "esModuleInterop": true
  }
}
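If you go the compile-first route, the data source the CLI loads should point at the emitted JavaScript. A sketch of a compiled-output variant of db_config, assuming the "outDir": "./build" from the tsconfig above (the globs are illustrative, not the poster's actual paths):

```typescript
// Hypothetical compiled-output variant of db_config.ts: after `tsc`,
// run the CLI against build/db_config.js and reference the *.js globs
// under the build directory instead of the *.ts sources.
import { DataSource } from "typeorm";

const AppDataSource = new DataSource({
  type: "postgres",
  host: process.env.PGHOST,
  port: 5432,
  username: process.env.PGUSER,
  password: process.env.PGPASSWORD,
  database: process.env.PGDATABASE,
  entities: ["./build/projects/**/entities/*.js"],
  migrations: ["./build/projects/**/migrations/*.js"],
});

export default AppDataSource;
```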
I recommend you open an issue on the typeorm github repo, I think it might be a bug.
Add the following to package.json scripts section:
"typeorm": "typeorm-ts-node-commonjs",
"migration:run": "ts-node ./node_modules/typeorm/cli.js migration:run -d ./src/data-source.ts",
"schema:sync": "npm run typeorm schema:sync -- -d src/data-source.ts",
"migration:show": "npm run typeorm migration:show -- -d src/data-source.ts",
"migration:generate": "npm run typeorm migration:generate -- -d src/data-source.ts",
"migration:create": "npm run typeorm migration:create"
You can then use npm run migration:create -- src/migration for example
I am having problems running my app under Windows. Normally I develop on a MacBook, but temporarily I had to switch. The thing is, the app was already working on Windows without problems. Here is the error message:
error: A hook (orm) failed to load!
verbose: Lowering sails...
verbose: Sent kill signal to child process (8684)...
verbose: Shutting down HTTP server...
verbose: HTTP server shut down successfully.
error: TypeError: Cannot read property 'config' of undefined
at validateModelDef (C:\projects\elearning-builder\node_modules\sails\node_modules\sails-hook-orm\lib\validate-model-def.js:109:84)
at C:\projects\elearning-builder\node_modules\sails\node_modules\sails-hook-orm\lib\initialize.js:218:36
at arrayEach (C:\projects\elearning-builder\node_modules\sails\node_modules\lodash\index.js:1289:13)
at Function.<anonymous> (C:\projects\elearning-builder\node_modules\sails\node_modules\lodash\index.js:3345:13)
at Array.async.auto._normalizeModelDefs (C:\projects\elearning-builder\node_modules\sails\node_modules\sails-hook-orm\lib\initialize.js:216:11)
at listener (C:\projects\elearning-builder\node_modules\sails\node_modules\sails-hook-orm\node_modules\async\lib\async.js:605:42)
at C:\projects\elearning-builder\node_modules\sails\node_modules\sails-hook-orm\node_modules\async\lib\async.js:544:17
at _arrayEach (C:\projects\elearning-builder\node_modules\sails\node_modules\sails-hook-orm\node_modules\async\lib\async.js:85:13)
at Immediate.taskComplete (C:\projects\elearning-builder\node_modules\sails\node_modules\sails-hook-orm\node_modules\async\lib\async.js:543:13)
at processImmediate [as _immediateCallback] (timers.js:383:17)
PS C:\projects\elearning-builder>
I tried to check what exactly is happening in \node_modules\sails\node_modules\sails-hook-orm\lib\validate-model-def.js:109:84, so I temporarily added a simple console.log:
console.log("error in line below", hook);
var normalizedDatastoreConfig = hook.datastores[normalizedModelDef.connection[0]].config;
And as a result I see:
error in line below Hook {
load: [Function: wrapper],
defaults:
{ globals: { adapters: true, models: true },
orm: { skipProductionWarnings: false, moduleDefinitions: [Object] },
models: { connection: 'localDiskDb' },
connections: { localDiskDb: [Object] } },
configure: [Function: wrapper],
loadModules: [Function: wrapper],
initialize: [Function: wrapper],
config: { envs: [] },
middleware: {},
routes: { before: {}, after: {} },
reload: [Function: wrapper],
teardown: [Function: wrapper],
identity: 'orm',
configKey: 'orm',
models:
{ /* models here, I removed this as it was too long */ },
adapters: {},
datastores: {} }
So, normalizedModelDef.connection[0] has the value development, but hook.datastores is empty. That is why there is no config property.
But the thing is, I do have connections in my config/connections.js
Like here:
development: {
  module: 'sails-mysql',
  host: 'localhost',
  port: 3306,
  user: 'ebuilder',
  password: 'ebuilder',
  database: 'ebuilder'
},
production: {
  /* details hidden ;) */
},
testing: {
  /* details hidden ;) */
}
Any suggestions/tips highly appreciated.
You have some connections defined, but do you have the default connection defined that might be specified in config/models.js? If for example you have:
module.exports.models = {
connection: 'mysql',
...
then 'mysql' needs to be defined in your connections.js
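In other words, the requirement is just that the name in config/models.js resolves to a key in config/connections.js. A minimal sketch, with plain objects standing in for the two config files (values are illustrative):

```javascript
// Illustrative only: config/models.js picks a connection by name...
const models = { connection: 'mysql' };

// ...and config/connections.js must define a connection under that exact key.
const connections = {
  mysql: {
    adapter: 'sails-mysql',
    host: 'localhost',
    user: 'ebuilder',
    password: 'ebuilder',
    database: 'ebuilder',
  },
};

// Sails resolves models.connection against the connections map:
const resolved = connections[models.connection];
```

When that lookup comes back undefined, reading a property off it produces a TypeError like the one in the question.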
As I see in your config/connections.js
development: {
module : 'sails-mysql',
host : 'localhost',
port : 3306,
user : 'ebuilder',
password : 'ebuilder',
database : 'ebuilder'
},
You have given module: 'sails-mysql', which is not correct. It should be adapter: 'sails-mysql':
development: {
  adapter: 'sails-mysql',
  host: 'localhost',
  port: 3306,
  user: 'ebuilder',
  password: 'ebuilder',
  database: 'ebuilder'
},
Check whether your controllers or models contain any erroneous code, such as a stray symbol. I faced the same problem when my controller contained a stray character.
I was able to set up a Grunt task to SFTP files up to my dev server using grunt-ssh:
sftp: {
  dev: {
    files: {
      './': ['**', '!{node_modules,artifacts,sql,logs}/**'],
    },
    options: {
      path: '/path/to/project',
      privateKey: grunt.file.read(process.env.HOME + '/.ssh/id_rsa'),
      host: '111.111.111.111',
      port: 22,
      username: 'marksthebest',
    }
  }
},
But this uploads everything when I run it. There are thousands of files. I don't have time to wait for them to upload one-by-one every time I modify a file.
How can I set up a watch to upload only the files I've changed, as soon as I've changed them?
(For the curious, the server is a VM on the local network. It runs on a different OS and the setup is more similar to production than my local machine. Uploads should be lightning quick if I can get this working correctly)
What you need is grunt-newer, a task designed especially to update the configuration of any task depending on what file just changed, then run it. An example configuration could look like the following:
watch: {
  all: {
    files: ['**', '!{node_modules,artifacts,sql,logs}/**'],
    tasks: ['newer:sftp:dev']
  }
}
You can do that using the watch event of grunt-contrib-watch.
You basically need to handle the watch event, modify the sftp files config to only include the changed files, and then let grunt run the sftp task.
Something like this:
module.exports = function(grunt) {
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    secret: grunt.file.readJSON('secret.json'),
    watch: {
      test: {
        files: 'files/**/*',
        tasks: 'sftp',
        options: {
          spawn: false
        }
      }
    },
    sftp: {
      test: {
        files: {
          "./": "files/**/*"
        },
        options: {
          path: '/path/on/the/server/',
          srcBasePath: 'files/',
          host: 'hostname.com',
          username: '<%= secret.username %>',
          password: '<%= secret.password %>',
          showProgress: true
        }
      }
    }
  }); // end grunt.initConfig

  // on watch events configure sftp.test.files to only run on changed file
  grunt.event.on('watch', function(action, filepath) {
    grunt.config('sftp.test.files', {"./": filepath});
  });

  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.loadNpmTasks('grunt-ssh');
};
Note the spawn: false option, and the way you need to set the config inside the event handler.
Note 2: this code uploads one file at a time; there's a more robust method at the same link.
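For that batch case, the pattern suggested in the grunt-contrib-watch README is to debounce the handler and collect every changed file first. A sketch adapted to the sftp config above (the helper name, the 200 ms delay, and the sftp.test target are assumptions):

```javascript
// Sketch of the batching variant: collect every file reported while the
// debounce window is open, then point the sftp task at the whole set at
// once instead of one file per event.
function registerBatchedSftp(grunt) {
  let changedFiles = Object.create(null);
  const onChange = grunt.util._.debounce(function () {
    // Reconfigure sftp with all files that changed during the window
    grunt.config('sftp.test.files', { './': Object.keys(changedFiles) });
    changedFiles = Object.create(null);
  }, 200);
  grunt.event.on('watch', function (action, filepath) {
    changedFiles[filepath] = action;
    onChange();
  });
}
```

Call registerBatchedSftp(grunt) inside your Gruntfile after grunt.initConfig, in place of the single-file handler.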
You can achieve that with Grunt:
grunt-contrib-watch
grunt-rsync
First things first: I am using a Docker container, and I added a public SSH key to it. So I upload to my "remote" container only the files that have changed in my local environment, with this Grunt task:
'use strict';

module.exports = function(grunt) {
  grunt.initConfig({
    rsync: {
      options: {
        args: ['-avz', '--verbose', '--delete'],
        exclude: ['.git*', 'cache', 'log'],
        recursive: true
      },
      development: {
        options: {
          src: './',
          dest: '/var/www/development',
          host: 'root@www.localhost.com',
          port: 2222
        }
      }
    },
    sshexec: {
      development: {
        command: 'chown -R www-data:www-data /var/www/development',
        options: {
          host: 'www.localhost.com',
          username: 'root',
          port: 2222,
          privateKey: grunt.file.read("/Users/YOUR_USER/.ssh/id_containers_rsa")
        }
      }
    },
    watch: {
      development: {
        files: [
          'node_modules',
          'package.json',
          'Gruntfile.js',
          '.gitignore',
          '.htaccess',
          'README.md',
          'config/*',
          'modules/*',
          'themes/*',
          '!cache/*',
          '!log/*'
        ],
        tasks: ['rsync:development', 'sshexec:development']
      }
    },
  });

  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.loadNpmTasks('grunt-rsync');
  grunt.loadNpmTasks('grunt-ssh');

  grunt.registerTask('default', ['watch:development']);
};
Good Luck and Happy Hacking!
I recently ran into a similar issue where I wanted to upload only files that have changed. I'm only using grunt-exec. Provided you have SSH access to your server, you can do this task with much greater efficiency. I also created an rsync.json that is ignored by git, so collaborators can have their own rsync data.
The benefit is that if anyone makes a change it automatically uploads to their stage.
// Watch - runs tasks when any changes are detected.
watch: {
  scripts: {
    files: '**/*',
    tasks: ['deploy'],
    options: {
      spawn: false
    }
  }
}
My deploy task is a registered task that compiles scripts then runs exec:deploy
// Showing exec:deploy task
// Using rsync with ssh keys instead of login/pass
exec: {
  deploy: {
    cmd: 'rsync public_html/* <%= rsync.options %> <%= rsync.user %>@<%= rsync.host %>:<%= rsync.path %>'
  }
}
See all the <%= rsync %> stuff? I use that to grab info from rsync.json, which is ignored by git. I only have this because this is a team workflow.
// rsync.json
{
"options": "-rvp --progress -a --delete -e 'ssh -q'",
"user": "mmcfarland",
"host": "example.com",
"path": "~/stage/public_html"
}
Make sure your rsync.json is loaded in your Gruntfile:
module.exports = function(grunt) {
  var rsync = grunt.file.readJSON('path/to/rsync.json');
  var pkg = grunt.file.readJSON('path/to/package.json');

  grunt.initConfig({
    pkg: pkg,
    rsync: rsync,
I don't think it's a good idea to upload everything that changed at once to a staging server, and working on the staging server is not a good idea either. You should configure your local machine's server to be the same as staging/production.
It's better to upload one time, when you do a deployment.
You can archive all the files using grunt-contrib-compress and push them using grunt-ssh as one file, then extract it on the server; that will be much faster.
Here's an example of the compress task:
compress: {
  main: {
    options: {
      archive: 'build/build.tar.gz',
      mode: 'tgz'
    },
    files: [
      {cwd: 'build/', src: ['sites/all/modules/**'], dest: './'},
      {cwd: 'build/', src: ['sites/all/themes/**'], dest: './'},
      {cwd: 'build/', src: ['sites/default/files/**'], dest: './'}
    ]
  }
}
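To complete the picture, the archive still has to be pushed and unpacked. A hypothetical pair of grunt-ssh tasks, sftp plus sshexec (the host, paths, and key location are placeholders borrowed from the question):

```javascript
// Hypothetical follow-up tasks using grunt-ssh: upload the tarball,
// then extract it on the server. All hosts and paths are placeholders.
sftp: {
  deploy: {
    files: { './': ['build/build.tar.gz'] },
    options: {
      path: '/path/to/project/',
      host: '111.111.111.111',
      username: 'marksthebest',
      privateKey: grunt.file.read(process.env.HOME + '/.ssh/id_rsa')
    }
  }
},
sshexec: {
  extract: {
    command: 'tar -xzf /path/to/project/build.tar.gz -C /path/to/project',
    options: {
      host: '111.111.111.111',
      username: 'marksthebest',
      privateKey: grunt.file.read(process.env.HOME + '/.ssh/id_rsa')
    }
  }
}
```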
PS: I never looked at the rsync Grunt modules.
I understand this might not be what you are looking for, but I decided to post this as a standalone answer.