I'm learning Prisma ORM from video tutorials and the official docs. They all explain and write every model in one file called schema.prisma. That's fine, but as the application grows it becomes messy. How should I separate my model definitions into multiple files?
At this point in time Prisma doesn't support splitting the schema across multiple files, but I can recommend three workarounds.
Option 1: Prismix
Prismix merges multiple schema files into one, letting models and enums reference each other across files; you describe the merge in a prismix configuration file:
{
  "mixers": [
    {
      "input": [
        "base.prisma",
        "./modules/auth/auth.prisma",
        "./modules/posts/posts.prisma"
      ],
      "output": "prisma/schema.prisma"
    }
  ]
}
Place this inside a prismix.config.json file; it defines how you'd like your Prisma schema fragments to be merged.
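With that in place, you run Prismix to produce the merged schema before generating the client. Assuming Prismix's npx entry point, that's something like:
npx prismix && npx prisma generate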
Option 2: Schemix
Schemix utilizes TypeScript configuration files to handle schema segmentation.
For example:
// _schema.ts
import { createSchema } from "schemix";

export const PrismaSchema = createSchema({
  datasource: {
    provider: "postgresql",
    url: {
      env: "DATABASE_URL"
    },
  },
  generator: {
    provider: "prisma-client-js",
  },
});

// Exported so the per-model files below can import them.
// PostModel and PostTypeEnum are assumed here because models/User.model.ts imports them.
export const UserModel = PrismaSchema.createModel("User");
export const PostModel = PrismaSchema.createModel("Post");
export const PostTypeEnum = PrismaSchema.createEnum("PostType");

import "./models/User.model";

PrismaSchema.export("./", "schema");
Inside of models/User.model.ts:
// models/User.model.ts
import { UserModel, PostModel, PostTypeEnum } from "../_schema";

UserModel
  .string("id", { id: true, default: { uuid: true } })
  .int("registrantNumber", { default: { autoincrement: true } })
  .boolean("isBanned", { default: false })
  .relation("posts", PostModel, { list: true })
  .raw('@@map("service_user")');
This will then generate prisma/schema.prisma containing your full schema. I used only one database as an example (taken from the documentation), but you should get the point.
Option 3: Cat -> Generate
Segment your schema into *.part.prisma files (keep the datasource block in just one of them, since the concatenated schema may contain only one) and run:
cat *.part.prisma > schema.prisma
yarn prisma generate
Most, if not all, of these are referenced in the currently open issue regarding support for Prisma schema file segmentation: https://github.com/prisma/prisma/issues/2377
This is not yet possible with Prisma. See this outstanding issue for possible workarounds https://github.com/prisma/prisma/issues/2377.
There is a library called Prismix that allows you to write as many schema files as you'd like (see Option 1 above).
I have an app in progress which requires three micro codebases and some iframe postMessage work.
I aim to control the builds of all of these from a single Rollup config file, injecting arguments via the CLI:
"scripts": {
"dev:child": "rollup -c rollup.config.js -w --for_type child",
"dev:parent": "rollup -c rollup.config.js -w --for_type parent",
"dev:sidebar": "rollup -c rollup.config.js -w --for_type sidebar",
My Rollup config works, but Rollup complains that for_type is an unrecognised argument.
What is the proper way to inject custom arguments into Rollup?
I cannot see anything here: https://rollupjs.org/guide/en/#configuration-files
When you export a function instead of an object in rollup.config.js, the first argument of the function will be a CLI arguments object. For example, if your config is:
export default cliArgs => {
  let build = {
    input: 'src/index.js',
    output: [/* ... */],
    plugins: [/* ... */]
  };

  switch (cliArgs.for_type) {
    case 'child':
      // Customise build for child
      break;
    case 'parent':
      // Customise build for parent
      break;
    case 'sidebar':
      // Customise build for sidebar
      break;
    default:
      // No CLI argument
      break;
  }

  return build;
};
Have another look at the documentation for more info.
From https://rollupjs.org/guide/en/#configuration-files
You can even define your own command line options if you prefix them with config:
export default commandLineArgs => {
  if (commandLineArgs.configDebug === true) {
    return debugConfig;
  }
  return defaultConfig;
};
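Applied to the question's setup, that means renaming the flag so it carries the config prefix (the option name configForType below is just an assumed example) and branching on it in the config function:
// rollup.config.js -- invoked e.g. as: rollup -c rollup.config.js -w --configForType child
export default commandLineArgs => {
  // Rollup passes config-prefixed CLI options through without warning,
  // so commandLineArgs.configForType will be 'child', 'parent' or 'sidebar'.
  const forType = commandLineArgs.configForType;
  const build = {
    input: 'src/index.js',
    output: [/* ... */],
    plugins: [/* ... */]
  };
  if (forType === 'child') {
    // Customise build for the child bundle here
  }
  return build;
};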
I'm creating a custom linting rule in lesshint.
I want to access the filename of the file being walked.
Current code:
module.exports = {
  name: 'customrule',
  nodeTypes: ['decl'],
  lint: function(config, node) {
    console.log(node.root().source.input.from);
  }
};
The closest I've got is node.root().source.input.from, which seems to output the index of the file, but not its name.
The config object seems to be a boolean.
Lead maintainer of lesshint here.
I've just pushed an update where the full file path will be included on all nodes. You can access it via the source property, like this:
module.exports = {
  name: 'customrule',
  nodeTypes: ['decl'],
  lint: function(config, node) {
    console.log(node.source.input.file);
  }
};
The value of config will be the value you specified for it in your .lesshintrc file. For example:
{
  "customrule": {
    "enabled": true,
    "option": false
  }
}
That will pass the object above as config.
Link to the newly released version: https://github.com/lesshint/lesshint/releases/tag/v4.6.2
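Combining the two, a rule can branch on its own options and still read the file path. A minimal sketch (the option name comes from the example config above, and any actual reporting logic is omitted):
module.exports = {
  name: 'customrule',
  nodeTypes: ['decl'],
  lint: function(config, node) {
    // config is the object from .lesshintrc, e.g. { enabled: true, option: false }
    if (!config.option) {
      return; // bail out when the rule's option is disabled
    }
    // node.source.input.file holds the full path of the file being linted
    console.log(node.source.input.file);
  }
};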
I've set up Grunt to compile Compass with a watch task, and now I'd like to copy just the useful files from bower_components (e.g. bower_components/jquery/jquery.min.js), because Bower produces a lot of unnecessary files that I want to get rid of when uploading to the server.
CMD produces a warning and stops the process:
Reading C:\Users\sjiamnocna\Documents\NetBeansProjects\PM_new\node_modules\grunt-bowercopy\package.json...OK
Parsing C:\Users\sjiamnocna\Documents\NetBeansProjects\PM_new\node_modules\grunt-bowercopy\package.json...OK
Reading bower.json...OK
Parsing bower.json...OK
Loading "bowercopy.js" tasks...OK
+ bowercopy
Loading "gruntfile.js" tasks...OK
+ default, dog
No tasks specified, running default tasks.
Running tasks: default
Running "default" task
Running "bowercopy" task
Running "bowercopy:copythat" (bowercopy) task
Verifying property bowercopy.copythat exists in config...OK
File: [no files]
Options: srcPrefix="bower_components", destPrefix="files", ignore=[], report, runBower=false, clean=false, copyOptions={}
Using srcPrefix: bower_components
Using destPrefix: files
Warning: Nothing was copied for the "copythat" target Use --force to continue.
My gruntfile:
module.exports = function (grunt) {
  grunt.initConfig({
    watch: {
      scss: {
        files: ['files/style/sass/*.scss'],
        tasks: ['compass']
      }
    },
    compass: {
      dist: {
        options: {
          sassDir: 'files/style/sass',
          cssDir: 'files/style',
          environment: 'development'
        }
      }
    },
    bowercopy: {
      copythat: {
        options: {
          runBower: false,
          srcPrefix: 'bower_components',
          destPrefix: 'files'
        },
        script: {
          'jquery/dist/jquery.min.js': 'script/lib/jquery.min.js',
          'jquery-ui/jquery-ui.min.js': 'script/lib/jquery-ui.min.js',
          'masonry/dist/masonry.pkgd.min.js': 'script/lib/masonry.pkgd.min.js',
          'sweetalert/dist/sweetalert.min.js': 'script/lib/sweetalert.min.js'
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.loadNpmTasks('grunt-contrib-compass');
  grunt.loadNpmTasks('grunt-bowercopy');

  grunt.registerTask('default', ['bowercopy']);
  grunt.registerTask('dog', ['watch']);
};
Can anyone tell me what's wrong? Or is there any other way to do it with Grunt (automatically :))?
Thanks @cartant for the comment; that was one of the mistakes I'd made: using an arbitrary key (script) instead of "files".
I had also swapped the positions of the source and the target.
Wrong:
'source': 'target'
Improved:
'target': 'source'
Works!
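Putting both fixes together, the corrected bowercopy target from the question would look like this:
bowercopy: {
  copythat: {
    options: {
      runBower: false,
      srcPrefix: 'bower_components',
      destPrefix: 'files'
    },
    files: {
      // 'destination': 'source' (relative to destPrefix and srcPrefix)
      'script/lib/jquery.min.js': 'jquery/dist/jquery.min.js',
      'script/lib/jquery-ui.min.js': 'jquery-ui/jquery-ui.min.js',
      'script/lib/masonry.pkgd.min.js': 'masonry/dist/masonry.pkgd.min.js',
      'script/lib/sweetalert.min.js': 'sweetalert/dist/sweetalert.min.js'
    }
  }
}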
I was able to set up a Grunt task to SFTP files up to my dev server using grunt-ssh:
sftp: {
  dev: {
    files: {
      './': ['**', '!{node_modules,artifacts,sql,logs}/**']
    },
    options: {
      path: '/path/to/project',
      privateKey: grunt.file.read(process.env.HOME + '/.ssh/id_rsa'),
      host: '111.111.111.111',
      port: 22,
      username: 'marksthebest'
    }
  }
},
But this uploads everything when I run it. There are thousands of files. I don't have time to wait for them to upload one-by-one every time I modify a file.
How can I set up a watch to upload only the files I've changed, as soon as I've changed them?
(For the curious, the server is a VM on the local network. It runs on a different OS and the setup is more similar to production than my local machine. Uploads should be lightning quick if I can get this working correctly)
What you need is grunt-newer, a task designed specifically for this: it rewrites another task's file configuration to include only the files changed since its last successful run, and then runs it. An example configuration could look like the following:
watch: {
  all: {
    files: ['**', '!{node_modules,artifacts,sql,logs}/**'],
    tasks: ['newer:sftp:dev']
  }
}
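Don't forget to load the plugin in your Gruntfile so the newer: prefix is available:
grunt.loadNpmTasks('grunt-newer');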
You can do that using the watch event of grunt-contrib-watch.
You basically need to handle the watch event, modify the sftp files config to only include the changed files, and then let grunt run the sftp task.
Something like this:
module.exports = function(grunt) {
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    secret: grunt.file.readJSON('secret.json'),
    watch: {
      test: {
        files: 'files/**/*',
        tasks: 'sftp',
        options: {
          spawn: false
        }
      }
    },
    sftp: {
      test: {
        files: {
          "./": "files/**/*"
        },
        options: {
          path: '/path/on/the/server/',
          srcBasePath: 'files/',
          host: 'hostname.com',
          username: '<%= secret.username %>',
          password: '<%= secret.password %>',
          showProgress: true
        }
      }
    }
  }); // end grunt.initConfig

  // on watch events configure sftp.test.files to only run on changed file
  grunt.event.on('watch', function(action, filepath) {
    grunt.config('sftp.test.files', {"./": filepath});
  });

  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.loadNpmTasks('grunt-ssh');
};
Note the "spawn: false" option, and the way you need to set the config inside the event handler.
Note 2: this code will upload one file at a time; there's a more robust method, which batches several quickly-changed files, in the grunt-contrib-watch documentation.
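For completeness, here's a sketch of that batching variant, adapted from the grunt-contrib-watch README (it collects every file changed within a short burst before reconfiguring sftp once):
var changedFiles = Object.create(null);
var onChange = grunt.util._.debounce(function() {
  // push all files changed in the last 200ms in a single sftp run
  grunt.config('sftp.test.files', { './': Object.keys(changedFiles) });
  changedFiles = Object.create(null);
}, 200);
grunt.event.on('watch', function(action, filepath) {
  changedFiles[filepath] = action;
  onChange();
});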
You can achieve that with Grunt:
grunt-contrib-watch
grunt-rsync
First things first: I am using a Docker container, and I added a public SSH key to it. So I upload to my "remote" container only the files that have changed in my local environment, with this Grunt task:
'use strict';

module.exports = function(grunt) {
  grunt.initConfig({
    rsync: {
      options: {
        args: ['-avz', '--verbose', '--delete'],
        exclude: ['.git*', 'cache', 'log'],
        recursive: true
      },
      development: {
        options: {
          src: './',
          dest: '/var/www/development',
          host: 'root@www.localhost.com',
          port: 2222
        }
      }
    },
    sshexec: {
      development: {
        command: 'chown -R www-data:www-data /var/www/development',
        options: {
          host: 'www.localhost.com',
          username: 'root',
          port: 2222,
          privateKey: grunt.file.read("/Users/YOUR_USER/.ssh/id_containers_rsa")
        }
      }
    },
    watch: {
      development: {
        files: [
          'node_modules',
          'package.json',
          'Gruntfile.js',
          '.gitignore',
          '.htaccess',
          'README.md',
          'config/*',
          'modules/*',
          'themes/*',
          '!cache/*',
          '!log/*'
        ],
        tasks: ['rsync:development', 'sshexec:development']
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.loadNpmTasks('grunt-rsync');
  grunt.loadNpmTasks('grunt-ssh');

  grunt.registerTask('default', ['watch:development']);
};
Good Luck and Happy Hacking!
I recently ran into a similar issue where I wanted to upload only the files that have changed. I'm only using grunt-exec. Provided you have SSH access to your server, you can do this task with much greater efficiency. I also created an rsync.json that is ignored by Git, so collaborators can keep their own rsync data.
The benefit is that if anyone makes a change, it automatically uploads to their staging area.
// Watch - runs tasks when any changes are detected.
watch: {
  scripts: {
    files: '**/*',
    tasks: ['deploy'],
    options: {
      spawn: false
    }
  }
}
My deploy task is a registered task that compiles scripts, then runs exec:deploy:
// Showing exec:deploy task
// Using rsync with ssh keys instead of login/pass
exec: {
  deploy: {
    cmd: 'rsync public_html/* <%= rsync.options %> <%= rsync.user %>@<%= rsync.host %>:<%= rsync.path %>'
  }
}
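The registered deploy task itself would look something like this; uglify is just a hypothetical stand-in for whatever compile step you use:
grunt.registerTask('deploy', ['uglify', 'exec:deploy']);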
You see a lot of the <%= rsync %> stuff? I use that to grab info from rsync.json, which is ignored by Git. I only have this because this is a team workflow.
// rsync.json
{
  "options": "-rvp --progress -a --delete -e 'ssh -q'",
  "user": "mmcfarland",
  "host": "example.com",
  "path": "~/stage/public_html"
}
Make sure your rsync.json is loaded in your Gruntfile:
module.exports = function(grunt) {
  var rsync = grunt.file.readJSON('path/to/rsync.json');
  var pkg = grunt.file.readJSON('path/to/package.json');

  grunt.initConfig({
    pkg: pkg,
    rsync: rsync,
I don't think it's a good idea to upload everything that changes straight to the staging server, and working on the staging server isn't a good idea either. You should configure your local machine's server to match staging/production.
It's better to upload once, when you deploy.
You can archive all the files using grunt-contrib-compress and push them with grunt-ssh as a single file, then extract it on the server; that will be much faster.
Here's an example of the compress task:
compress: {
  main: {
    options: {
      archive: 'build/build.tar.gz',
      mode: 'tgz'
    },
    files: [
      // expand is needed for cwd to take effect when building the file list
      { expand: true, cwd: 'build/', src: ['sites/all/modules/**'], dest: './' },
      { expand: true, cwd: 'build/', src: ['sites/all/themes/**'], dest: './' },
      { expand: true, cwd: 'build/', src: ['sites/default/files/**'], dest: './' }
    ]
  }
}
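The matching upload-and-extract steps could then use grunt-ssh, which provides both sftp and sshexec. This is only a sketch, with hypothetical host, user and paths:
sftp: {
  release: {
    // upload the single archive produced by the compress task
    files: { './': 'build/build.tar.gz' },
    options: {
      path: '/var/www/releases/',
      host: 'example.com',
      username: 'deploy',
      privateKey: grunt.file.read(process.env.HOME + '/.ssh/id_rsa')
    }
  }
},
sshexec: {
  extract: {
    // unpack the archive in place on the server
    command: 'cd /var/www/releases && tar -xzf build.tar.gz',
    options: {
      host: 'example.com',
      username: 'deploy',
      privateKey: grunt.file.read(process.env.HOME + '/.ssh/id_rsa')
    }
  }
}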
PS: I never looked into the rsync Grunt modules.
I understand this might not be what you are looking for, but I decided to post it as a standalone answer.