Problem connecting Sequelize to PostgreSQL in Express

I have a little problem connecting Sequelize in Express:
TypeError: sequelize.sync is not a function
I have the database set up in Docker:
database:
  image: postgres
  container_name: database
  restart: unless-stopped
  environment:
    POSTGRES_PASSWORD: secret
    POSTGRES_DB: express_database
  volumes:
    - ./postgres-data:/var/lib/postgresql/data
  ports:
    - "5432:5432"
And this is my Sequelize setup:
import Sequelize from 'sequelize'

const sequelize = new Sequelize(
  'express_database',
  'postgres',
  'secret',
  {
    host: 'database',
    dialect: 'postgres'
  }
);

export default { sequelize }
And in index.js
import sequelize from './Util/database.js'
(async () => {
  try {
    await sequelize.sync({
      force: false
    })
  } catch (e) {
    console.log(e)
  }
})()
And my user model
import Sequelize from 'sequelize';
import db from '../Util/database';
const User = db.define('users', {
  id: {
    type: Sequelize.INTEGER,
    autoIncrement: true,
    allowNull: false,
    primaryKey: true
  },
  first_name: {
    type: Sequelize.STRING,
    allowNull: false
  }
});

export default { User }
I have struggled all day and searched for information, but I can't figure out what the problem might be. Do you know how I can solve this?
Is it a problem that I wrote the code with ES6?

export default { sequelize } exports an object with a single key named 'sequelize' that contains what you actually want.
Replace it with export default sequelize. Otherwise you'd be forced to call sequelize.sequelize.sync() when you import it^^
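As a minimal sketch (assuming the same file layout as in the question; the model file path is hypothetical), the corrected exports would look roughly like this:

// Util/database.js
import Sequelize from 'sequelize'

const sequelize = new Sequelize('express_database', 'postgres', 'secret', {
  host: 'database',
  dialect: 'postgres'
});

export default sequelize // export the instance itself, not an object wrapping it

// user model (hypothetical path, e.g. Models/user.js)
import Sequelize from 'sequelize';
import sequelize from '../Util/database.js';

const User = sequelize.define('users', {
  id: { type: Sequelize.INTEGER, autoIncrement: true, allowNull: false, primaryKey: true },
  first_name: { type: Sequelize.STRING, allowNull: false }
});

export default User // same reasoning applies to the model export

With that, await sequelize.sync({ force: false }) in index.js works as written.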

Related

Getting Adapter not added. Use RxDB.plugin(require('pouchdb-adapter-[adaptername]'); in ReactNative

RxError: RxDatabase.create(): Adapter not added. Use RxDB.plugin(require('pouchdb-adapter-[adaptername]');
Given parameters: {
  adapter: "asyncstorage"
}
database.js // my code

import RxDB from 'rxdb';
import schema from './ramsSchema';

RxDB.plugin(require('pouchdb-adapter-asyncstorage').default);
RxDB.plugin(require('pouchdb-adapter-http'));

const syncURL = 'couchDB url'

// this function initializes the RxDB if the DB already exists, else creates a new one and returns the db instance
export async function initializeDB(dbName, password) {
  const db = await RxDB.create({
    name: dbName.toLowerCase(),
    adapter: 'asyncstorage',
    password: 'rams#1234',
    multiInstance: false,
    ignoreDuplicate: true,
  });
  const collection = await db.collection({
    name: 'rams',
    schema,
  });
  collection.sync({
    remote: syncURL + dbName.toLowerCase() + '/',
    options: {
      live: true,
      retry: true,
    },
  });
  return db;
}
How can I fix this?
The createRxDatabase function is used like this in the documentation:
import {
  createRxDatabase
} from 'rxdb';
import { getRxStorageDexie } from 'rxdb/plugins/dexie';

// create a database
const db = await createRxDatabase({
  name: 'heroesdb', // the name of the database
  storage: getRxStorageDexie()
});
By the way, PouchDB is deprecated in the RxDB documentation.
The RxStorage PouchDB usage looks like this:
import { createRxDatabase } from 'rxdb';
import { getRxStoragePouch, addPouchPlugin } from 'rxdb/plugins/pouchdb';

addPouchPlugin(require('pouchdb-adapter-idb'));

const db = await createRxDatabase({
  name: 'exampledb',
  storage: getRxStoragePouch(
    'idb',
    {
      /**
       * other pouchdb specific options
       * @link https://pouchdb.com/api.html#create_database
       */
    }
  )
});
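Applied to the React Native / asyncstorage setup from the question, a rough, untested sketch of the same initialization on the newer API might look like this (the adapter and database handling come from the question; the password/encryption part is left out of the sketch):

import { createRxDatabase } from 'rxdb';
import { getRxStoragePouch, addPouchPlugin } from 'rxdb/plugins/pouchdb';

// register the PouchDB adapters used in the question
addPouchPlugin(require('pouchdb-adapter-asyncstorage').default);
addPouchPlugin(require('pouchdb-adapter-http'));

export async function initializeDB(dbName) {
  const db = await createRxDatabase({
    name: dbName.toLowerCase(),
    storage: getRxStoragePouch('asyncstorage'),
    multiInstance: false,
    ignoreDuplicate: true,
  });
  return db;
}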

Next-Auth Store Session in Redis

I'm coming from Express and have never used next-auth before, so I'm unsure how to store the user session in a Redis database.
In Express, I would have done the following:
import express from 'express';
import session from 'express-session';
import connectRedis from 'connect-redis';
import Redis from 'ioredis';
import { __prod__, COOKIE_NAME } from './constants';

const main = async () => {
  const app = express();
  const RedisStore = connectRedis(session);
  const redis = new Redis(process.env.REDIS_URL);
  app.use(
    session({
      name: 'qid',
      store: new RedisStore({
        client: redis,
        disableTouch: true,
        ttl: 1000 * 60 * 60 * 24 * 365, // 1 year
      }),
      cookie: {
        maxAge: 1000 * 60 * 60 * 24 * 365, // 1 year
        httpOnly: true,
        sameSite: 'lax',
        secure: __prod__,
        domain: __prod__ ? process.env.DOMAIN : undefined,
      },
      saveUninitialized: false,
      secret: process.env.SESSION_SECRET,
      resave: false,
    }),
  );
};
main()
[...nextauth].ts
import NextAuth, { type NextAuthOptions } from "next-auth";
import CredentialsProvider from "next-auth/providers/credentials";
import { PrismaAdapter } from "@next-auth/prisma-adapter";
import { prisma } from "../../../server/db/client";

export const authOptions: NextAuthOptions = {
  callbacks: {
    session({ session, user }) {
      if (session.user) {
        session.user.id = user.id;
      }
      return session;
    },
  },
  adapter: PrismaAdapter(prisma),
  providers: [
    CredentialsProvider({
      async authorize(credentials, req) {
        //
      },
    }),
  ],
};

export default NextAuth(authOptions);
I can't find any Redis implementations for NextAuth, other than Upstash, which is used for caching but not for sessions.
I made an adapter for Next Auth that uses ioredis to store data in the Hash data structure.
In the Upstash adapter, they store the data with JSON.stringify.
In my adapter, I use the Hash data structure so it is easier to extend the User object.
You can take a look at this repository.
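To illustrate the difference between the two approaches, here is a rough sketch with ioredis (not the adapter's actual code; the key and fields are made up):

import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL);

async function demo() {
  // JSON approach (as in the Upstash adapter): the whole user is one serialized string,
  // so any change to its shape means rewriting and re-parsing the entire value.
  await redis.set('user:123', JSON.stringify({ id: '123', name: 'Jane' }));
  const fromJson = JSON.parse(await redis.get('user:123'));

  // Hash approach: every field is its own entry, so the User object
  // can be extended field by field without touching the rest.
  await redis.hset('user:123', { id: '123', name: 'Jane' });
  await redis.hset('user:123', 'emailVerified', new Date().toISOString());
  const fromHash = await redis.hgetall('user:123');

  console.log(fromJson, fromHash);
}

demo();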

reference cognito user pool created by sst (serverless-stack) in serverless.yml

I have quite a big app built with Serverless, and now we are trying out serverless-stack (SST). I am trying to reference a user pool created by SST in a serverless.yml function. Is that possible? Below are the steps I've tried.
I have created a user pool:
import * as cdk from '@aws-cdk/core'
import * as cognito from '@aws-cdk/aws-cognito'
import * as sst from '@serverless-stack/resources'

export default class UserServiceStack extends sst.Stack {
  constructor(scope: cdk.Construct, id: string, props: sst.StackProps = {}) {
    super(scope, id, props)
    const userPool = new cognito.UserPool(this, 'userPool', {
      signInAliases: {
        email: true,
        phone: true,
      },
      autoVerify: {
        email: true,
        phone: true,
      },
      passwordPolicy: {
        minLength: 8,
        requireDigits: false,
        requireLowercase: false,
        requireSymbols: false,
        requireUppercase: false,
      },
      signInCaseSensitive: false,
      selfSignUpEnabled: true,
    })
    new cdk.CfnOutput(this, 'UserPoolId', {
      value: userPool.userPoolId,
    })
    const userPoolClient = new cognito.UserPoolClient(this, 'userPoolClient', {
      userPool,
      authFlows: {
        adminUserPassword: true,
        userPassword: true,
      },
    })
    new cdk.CfnOutput(this, 'UserPoolClientId', {
      value: userPoolClient.userPoolClientId,
    })
  }
}
and I want to update my post-confirmation trigger defined in serverless.yml:
...
createUser:
  handler: createUser.default
  events:
    - cognitoUserPool:
        pool: !ImportValue '${self:custom.sstApp}...' # what to put here?
        trigger: PostConfirmation
        existing: true
Figured it out.
First, how to use CDK output variables in serverless.yml: export them into a file
AWS_PROFILE=<profile-name> npx sst deploy --outputs-file ./exports.json
and in serverless.yml you can reference them like so:
...
createUser:
  handler: createUser.default
  events:
    - cognitoUserPool:
        pool: ${file(../../infrastructure/exports.json):${self:custom.sstApp}-UserServiceStack.userPoolName}
        trigger: PostConfirmation
        existing: true
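For reference, the generated exports.json is a plain JSON map from stack name to that stack's CfnOutput values, so it looks roughly like this (hypothetical app name "my-sst-app" and placeholder values; the userPoolName and userPoolArn outputs are added in the steps below):

{
  "my-sst-app-UserServiceStack": {
    "UserPoolId": "us-east-1_XXXXXXXXX",
    "UserPoolClientId": "xxxxxxxxxxxxxxxxxxxxxxxxxx",
    "userPoolName": "<generated uuid>",
    "userPoolArn": "arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_XXXXXXXXX"
  }
}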
Second, Serverless is set up such that you have to pass a userPoolName, not a userPoolId. So I had to generate a user pool name and output it:
import * as uuid from 'uuid'
...
const userPoolName = uuid.v4()
const userPool = new cognito.UserPool(this, 'userPool', {
  userPoolName,
  ...
})
...
// eslint-disable-next-line no-new
new cdk.CfnOutput(this, 'userPoolName', {
  value: userPoolName,
})
Third, to avoid an AccessDeniedException when Cognito invokes the Lambda as a trigger, you need to add the following to your resources:
- Resources:
    OnCognitoSignupPermission:
      Type: 'AWS::Lambda::Permission'
      Properties:
        Action: "lambda:InvokeFunction"
        FunctionName:
          Fn::GetAtt: ["CreateUserLambdaFunction", "Arn"] # the name must be the uppercased name of your lambda with LambdaFunction appended
        Principal: "cognito-idp.amazonaws.com"
        SourceArn: ${file(../../infrastructure/exports.json):${self:custom.sstApp}-UserServiceStack.userPoolArn}

Serverless framework lambda function access denied to S3

Does anyone have any ideas why I'm getting "Access Denied" when trying to put an object into S3 inside a Lambda function? I have the serverless AWS user with AdministratorAccess and allow access to the S3 resource inside serverless.yml:
iamRoleStatements:
  - Effect: Allow
    Action:
      - s3:PutObject
    Resource: "arn:aws:s3:::*"
Edit - here are the files
serverless.yml
service: testtest
app: testtest
org: workx

provider:
  name: aws
  runtime: nodejs12.x
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
      Resource: "arn:aws:s3:::*/*"

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: users/create
          method: get
handler.js
'use strict';

const AWS = require('aws-sdk');

// get reference to S3 client
const S3 = new AWS.S3();

// Upload the content to S3 and allow download
async function uploadToS3(content) {
  console.log('going to upload to s3!');
  const Bucket = 'mtest-exports';
  const key = 'testtest.csv';
  try {
    const destparams = {
      Bucket,
      Key: key,
      Body: content,
      ContentType: "text/csv",
    };
    console.log('going to put object', destparams);
    const putResult = await S3.putObject(destparams).promise();
    return putResult;
  } catch (error) {
    console.log(error);
    throw error;
  }
}

module.exports.hello = async event => {
  const result = await uploadToS3('hello world');
  return {
    statusCode: 200,
    body: JSON.stringify(result),
  };
};
I was using the TypeScript plugin @serverless/typescript. I used it to create a Lambda function that resizes images uploaded to S3 and does some content moderation.
Here is the content of the serverless.ts file:
import type { AWS } from '@serverless/typescript';
import resizeImageLambda from '@functions/resizeImageLambda';

const serverlessConfiguration: AWS = {
  service: 'myservice-image-resize',
  frameworkVersion: '3',
  plugins: ['serverless-esbuild'],
  provider: {
    name: 'aws',
    stage: 'dev',
    region: 'us-east-1',
    profile: 'myProjectProfile', // reference to your local AWS profile created by serverless config command
    // architecture: 'arm64', // to support Lambda w/ graviton
    iam: {
      role: {
        statements: [
          {
            Effect: 'Allow',
            Action: [
              's3:GetObject',
              's3:PutObject',
              's3:PutObjectAcl',
              's3:ListBucket',
              'rekognition:DetectModerationLabels'
            ],
            Resource: [
              'arn:aws:s3:::myBucket/*',
              'arn:aws:s3:::myBucket',
              'arn:aws:s3:::/*',
              '*'
            ]
          },
          {
            Effect: 'Allow',
            Action: [
              's3:ListBucket',
              'rekognition:DetectModerationLabels'
            ],
            Resource: ['arn:aws:s3:::myBucket']
          }
        ]
      }
    },
    // architecture: 'arm64',
    runtime: 'nodejs16.x',
    environment: {
      AWS_NODEJS_CONNECTION_REUSE_ENABLED: '1',
      NODE_OPTIONS: '--enable-source-maps --stack-trace-limit=1000',
      SOURCE_BUCKET_NAME:
        '${self:custom.myEnvironment.SOURCE_BUCKET_NAME.${self:custom.myStage}}',
      DESTINATION_BUCKET_NAME:
        '${self:custom.myEnvironment.DESTINATION_BUCKET_NAME.${self:custom.myStage}}'
    }
  },
  // import the function via paths
  functions: { resizeImageLambda },
  package: { individually: true },
  custom: {
    esbuild: {
      bundle: true,
      minify: false,
      sourcemap: true,
      exclude: ['aws-sdk'],
      target: 'node16',
      define: { 'require.resolve': undefined },
      platform: 'node',
      concurrency: 10,
      external: ['sharp'],
      packagerOptions: {
        scripts:
          'rm -rf node_modules/sharp && SHARP_IGNORE_GLOBAL_LIBVIPS=1 npm install --arch=x64 --platform=linux --libc=glibc sharp'
      }
    },
    myEnvironment: {
      SOURCE_BUCKET_NAME: {
        dev: 'myBucket',
        prod: 'myBucket-prod'
      },
      DESTINATION_BUCKET_NAME: {
        dev: 'myBucket',
        prod: 'myBucketProd'
      }
    },
    myStage: '${opt:stage, self:provider.stage}'
  }
};

module.exports = serverlessConfiguration;
resizeImageLambda.ts
/* eslint-disable no-template-curly-in-string */
// import { Config } from './config';

export const handlerPath = (context: string) =>
  `${context.split(process.cwd())[1].substring(1).replace(/\\/g, '/')}`;

export default {
  handler: `${handlerPath(__dirname)}/handler.main`,
  events: [
    {
      s3: {
        bucket: '${self:custom.myEnvironment.SOURCE_BUCKET_NAME.${self:custom.myStage}}',
        event: 's3:ObjectCreated:*',
        existing: true,
        forceDeploy: true // for existing buckets
      }
    }
  ],
  timeout: 15 * 60, // 15 min
  memorySize: 2048
};
I remember there were a few issues when I wanted to connect it to existing buckets (created outside the Serverless framework), such as the IAM policy not being re-created / updated properly (see the forceDeploy and existing parameters in the function.events[0].s3 properties in the resizeImageLambda.ts file).
Turns out I was an idiot and had the custom config in the wrong place, which ruined the serverless.yml file!

Vue.js and Pouch - PouchDB is not a constructor

I am building a simple Vue project to get to grips with a simple store (Vuex) and then using PouchDB. I am hitting an odd error and am struggling to see where to go next.
const PouchDB = require('pouchdb')

import EntryForm from '@/components/EntryForm.vue'
import DisplayText from '@/components/DisplayText.vue'

export default {
  name: 'entryform',
  components: {
    EntryForm,
    DisplayText
  },
  data() {
    return {
      db: [],
      remote: [],
      stepfwd: [],
      stepback: [],
      newMutation: true
    }
  },
  created() {
    // window.db or this.db does it matter ?
    this.db = new PouchDB('icecream')
    this.remote = 'http://localhost:5984/icecream'
    this.db.sync(this.remote, { live: true, retry: true })
    // any mutations are put into the stepfwd array and put into the db array
    this.$store.subscribe(mutation => {
      if (mutation.type !== CLEAR_STATE) {
        this.stepfwd.push(mutation)
        this.db.push(mutation)
      }
      if (this.newMutation) {
        this.stepback = []
      }
    })
  }
}
Any help much appreciated.
All the branch code can be found here for review:
https://gitlab.adamprocter.co.uk/adamprocter/simplevuestore/tree/pouchdb