Multiple Redis clients with different persistence?

I need to have 3 different redis dbs, so I have 3 different redis clients, like so:
const gCDCache = redis.createClient();
const CDCache = redis.createClient();
const vcsCache = redis.createClient();
I need the first 2 caches to not persist, since they are just cooldown caches.
The third cache needs to persist because it contains somewhat important data.
Is there a way to have different persistence policies between different clients?
What is the best way to accomplish that?
I searched online a bit but couldn't find an answer; anything is appreciated, thanks.
For context, this is my caches.js file (there are 4 caches there):
// Bot redis caches
const redis = require("redis");
const { promisify } = require("util");
// Global Cooldown cache
const gCDCache = redis.createClient();
const setGCD = promisify(gCDCache.set).bind(gCDCache);
const getGCD = promisify(gCDCache.get).bind(gCDCache);
const existsGCD = promisify(gCDCache.exists).bind(gCDCache);
const delGCD = promisify(gCDCache.del).bind(gCDCache);
gCDCache.on("error", (error) => {
console.error(error);
});
// Cooldown cache
const CDCache = redis.createClient();
const getCD = promisify(CDCache.get).bind(CDCache);
const setCD = promisify(CDCache.set).bind(CDCache);
const existsCD = promisify(CDCache.exists).bind(CDCache);
const delCD = promisify(CDCache.del).bind(CDCache);
CDCache.on("error", (error) => {
console.error(error);
});
// Guild Settings Cache
const gCache = redis.createClient();
const setGuildSettings = promisify(gCache.set).bind(gCache);
const getGuildSettings = promisify(gCache.get).bind(gCache);
const existsGuildSettings = promisify(gCache.exists).bind(gCache);
const delGuildSettings = promisify(gCache.del).bind(gCache);
gCache.on("error", (error) => {
console.error(error);
});
// VCs Cache
const vcsCache = redis.createClient();
const setVCs = promisify(vcsCache.set).bind(vcsCache);
const getVCs = promisify(vcsCache.get).bind(vcsCache);
vcsCache.on("error", (error) => {
console.error(error);
});
const caches = {
gCache: gCache,
setGuildSettings: setGuildSettings,
getGuildSettings: getGuildSettings,
existsGuildSettings: existsGuildSettings,
delGuildSettings: delGuildSettings,
vcsCache: vcsCache,
setVCs: setVCs,
getVCs: getVCs,
gCDCache: gCDCache,
setGCD: setGCD,
getGCD: getGCD,
existsGCD: existsGCD,
delGCD: delGCD,
CDCache: CDCache,
getCD: getCD,
setCD: setCD,
existsCD: existsCD,
delCD: delCD
}
module.exports.caches = caches;
console.log('Astro\'s Caches Loaded');

Redis persistence is configured on the server side, not the client side, and it is not possible to have different persistence policies within a single Redis instance.
In your case, if you want different persistence policies (RDB, AOF, or disabled entirely), you can set up multiple Redis servers and create a separate client for each server.
With node-redis, you can create clients for different servers as follows:
redis.createClient([options])
redis.createClient(unix_socket[, options])
redis.createClient(redis_url[, options])
redis.createClient(port[, host][, options])
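For example, a minimal sketch (assuming node-redis v3, to match the promisify usage above, and two Redis servers: one started with persistence disabled, e.g. redis-server --port 6379 --save "" --appendonly no, and one on port 6380 with RDB and/or AOF enabled in its redis.conf):
const redis = require("redis");
// Ephemeral server (persistence disabled): holds the two cooldown caches.
const gCDCache = redis.createClient(6379, "127.0.0.1");
const CDCache = redis.createClient(6379, "127.0.0.1");
// Persistent server (RDB/AOF enabled): holds the important data.
const vcsCache = redis.createClient(6380, "127.0.0.1");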

Related

react-native (Expo) upload file on background

In my Expo (react-native) application, I want to perform the upload task even if the application is in the background or killed.
The upload should be done to Firebase Storage, so we don't have a REST API.
I checked out the Expo TaskManager library, but I could not figure out how it should be done. Is it even possible to achieve this goal with Expo? Is TaskManager the correct package for this task?
There are only some Expo packages that can be registered as a task (e.g. BackgroundFetch), and it is not possible to register a custom function (in this case the uploadFile method).
I got even more confused because we should add the UIBackgroundModes key for iOS, but it only has audio, location, voip, external-accessory, bluetooth-central, bluetooth-peripheral, fetch, remote-notification, and processing as possible values.
I would appreciate it if you could at least guide me on where to start or what to search for, to be able to upload the file even if the app is in the background or killed/terminated.
import { getStorage, ref, uploadBytes } from "firebase/storage";
const storage = getStorage();
const storageRef = ref(storage, 'videos');
const uploadFile = async (file)=>{
// the file is Blob object
await uploadBytes(storageRef, file);
}
I have already reviewed react-native-background-fetch, react-native-background-upload, and react-native-background-job. Upload would require ejecting Expo, job does not support iOS, and fetch is designed for running tasks at intervals.
If there is a way to use the mentioned libraries for my purpose, please guide me :)
To my understanding, the Firebase Cloud JSON API does not accept files, does it? If so, please give me an example. If I can make the Storage JSON API work with file uploads, then I can probably use Expo asyncUpload without ejecting.
I have done something similar to what you want; you can use expo-task-manager and expo-background-fetch. Here is the code as I used it. I hope this is useful for you.
import * as BackgroundFetch from 'expo-background-fetch';
import * as TaskManager from 'expo-task-manager';
import { useState, useEffect } from 'react';
const BACKGROUND_FETCH_TASK = 'background-fetch';
// These hooks belong inside your React component:
const [isRegistered, setIsRegistered] = useState(false);
const [status, setStatus] = useState(null);
// Minimum interval so that it runs on iOS
BackgroundFetch.setMinimumIntervalAsync(60 * 15);
// Define the task to execute
TaskManager.defineTask(BACKGROUND_FETCH_TASK, async () => {
const now = Date.now();
console.log(`Got background fetch call at date: ${new Date(now).toISOString()}`);
// Your function or instructions you want
return BackgroundFetch.Result.NewData;
});
// Register the task in BACKGROUND_FETCH_TASK
async function registerBackgroundFetchAsync() {
return BackgroundFetch.registerTaskAsync(BACKGROUND_FETCH_TASK, {
minimumInterval: 60 * 15, // 15 minutes
stopOnTerminate: false, // android only,
startOnBoot: true, // android only
});
}
// Task Status
const checkStatusAsync = async () => {
const status = await BackgroundFetch.getStatusAsync();
const isRegistered = await TaskManager.isTaskRegisteredAsync(
BACKGROUND_FETCH_TASK
);
setStatus(status);
setIsRegistered(isRegistered);
};
// Check if the task is already register
const toggleFetchTask = async () => {
if (isRegistered) {
console.log('Task ready');
} else {
await registerBackgroundFetchAsync();
console.log('Task registered');
}
checkStatusAsync();
};
useEffect(() => {
toggleFetchTask();
}, []);
Hope this isn't too late to be helpful.
I've been dealing with a variety of expo <-> firebase storage integrations recently, and here's some info that might be helpful.
First, I'd recommend not using the uploadBytes / uploadBytesResumable methods from Firebase. This Thread has a long ongoing discussion about it, but basically it's broken in v9. Maybe in the future the Firebase team will solve the issues, but it's pretty broken with Expo right now.
Instead, I'd recommend going down the route of writing a small Firebase function that either gives a signed upload URL or handles the upload itself.
Basically, if you can get storage uploads to work via an http endpoint, you can get any kind of upload mechanism working (e.g. the FileSystem.uploadAsync() method you're probably looking for here, like @brentvatne pointed out, or fetch, or axios; I'll show a basic wiring at the end).
Server Side
Option 1: Signed URL Upload.
Basically, have a small Firebase function that returns a signed URL. Your app calls a cloud function like /get-signed-upload-url, which returns the URL, which you then use. Check out https://cloud.google.com/storage/docs/access-control/signed-urls for how you'd go about this.
This might work well for your use case. It can be configured just like any httpsCallable function, so it's not much work to set up, compared to option 2.
However, this doesn't work with the Firebase Storage / Functions emulator! For this reason, I don't use this method, because I like to use the emulators intensively, and they only offer a subset of all the functionality.
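That said, a minimal sketch of option 1 (assuming firebase-admin is initialized; the getSignedUploadUrl name and the fileName/contentType fields are hypothetical, and getSignedUrl comes from the underlying @google-cloud/storage library):
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

export const getSignedUploadUrl = functions.https.onCall(async (data, context) => {
  if (!context.auth) {
    throw new functions.https.HttpsError("unauthenticated", "Sign in first.");
  }
  const { fileName, contentType } = data; // hypothetical request fields
  const file = admin.storage().bucket().file(`uploads/${context.auth.uid}/${fileName}`);
  // v4 signed URLs expire after at most 7 days; 15 minutes is plenty for one upload.
  const [url] = await file.getSignedUrl({
    version: "v4",
    action: "write",
    expires: Date.now() + 15 * 60 * 1000,
    contentType,
  });
  return { url };
});
The client then PUTs the file bytes to the returned url with the same Content-Type.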
Option 2: Upload the file entirely through a function
This is a little hairier, but gives you a lot more fidelity over your uploads, and will work on an emulator! I like this too because it allows doing the upload processing within the endpoint execution, instead of as a side effect.
For example, you can have a photo-upload endpoint generate thumbnails, and if the endpoint 201's, then you're good! Rather than the traditional Firebase approach of having a listener on Cloud Storage that generates thumbnails as a side effect, which then has all kinds of bad race conditions (checking for processing completion via exponential backoff? Gross!)
Here are three resources I'd recommend to go about this approach:
https://cloud.google.com/functions/docs/writing/http#multipart_data
https://github.com/firebase/firebase-js-sdk/issues/5848
https://github.com/mscdex/busboy
Basically, if you can make a Firebase cloud endpoint that accepts a File within formdata, you can have busboy parse it, and then you can do anything you want with it... like upload it to Cloud Storage!
An outline of this:
import * as functions from "firebase-functions";
import * as busboy from "busboy";
import * as os from "os";
import * as path from "path";
import * as fs from "fs";
type FieldMap = {
[fieldKey: string]: string;
};
type Upload = {
filepath: string;
mimeType: string;
};
type UploadMap = {
[fileName: string]: Upload;
};
const MAX_FILE_SIZE = 2 * 1024 * 1024; // 2MB
export const uploadPhoto = functions.https.onRequest(async (req, res) => {
verifyRequest(req); // Verify parameters, auth, etc. Better yet, use a middleware system for this like express.
// This object will accumulate all the fields, keyed by their name
const fields: FieldMap = {};
// This object will accumulate all the uploaded files, keyed by their name.
const uploads: UploadMap = {};
// This will accumulate errors during the busboy process, allowing us to end early.
const errors: string[] = [];
const tmpdir = os.tmpdir();
const fileWrites: Promise<unknown>[] = [];
function cleanup() {
Object.entries(uploads).forEach(([filename, { filepath }]) => {
console.log(`unlinking: ${filename} from ${filepath}`);
fs.unlinkSync(filepath);
});
}
const bb = busboy({
headers: req.headers,
limits: {
files: 1,
fields: 1,
fileSize: MAX_FILE_SIZE,
},
});
bb.on("file", (name, file, info) => {
verifyFile(name, file, info); // Verify your mimeType / filename, etc.
file.on("limit", () => {
console.log("too big of file!");
});
const { filename, mimeType } = info;
// Note: os.tmpdir() points to an in-memory file system on GCF
// Thus, any files in it must fit in the instance's memory.
console.log(`Processed file ${filename}`);
const filepath = path.join(tmpdir, filename);
uploads[filename] = {
filepath,
mimeType,
};
const writeStream = fs.createWriteStream(filepath);
file.pipe(writeStream);
// File was processed by Busboy; wait for it to be written.
// Note: GCF may not persist saved files across invocations.
// Persistent files must be kept in other locations
// (such as Cloud Storage buckets).
const promise = new Promise((resolve, reject) => {
file.on("end", () => {
writeStream.end();
});
writeStream.on("finish", resolve);
writeStream.on("error", reject);
});
fileWrites.push(promise);
});
bb.on("close", async () => {
await Promise.all(fileWrites);
// Fail if errors:
if (errors.length > 0) {
functions.logger.error("Upload failed", errors);
res.status(400).send(errors.join());
} else {
try {
const upload = Object.values(uploads)[0];
if (!upload) {
functions.logger.debug("No upload found");
res.status(400).send("No file uploaded");
return;
}
// userId is assumed to come from your earlier auth verification
const { uploadId } = await processUpload(upload, userId);
cleanup();
res.status(201).send({
uploadId,
});
} catch (error) {
cleanup();
functions.logger.error("Error processing file", error);
res.status(500).send("Error processing file");
}
}
});
bb.end(req.rawBody);
});
Then, that processUpload function can do anything you want with the file, like upload it to cloud storage:
async function processUpload({ filepath, mimeType }: Upload, userId: string) {
const fileId = uuidv4();
const bucket = admin.storage().bucket();
await bucket.upload(filepath, {
destination: `users/${userId}/${fileId}`,
metadata: {
contentType: mimeType,
},
});
return { uploadId: fileId };
}
Mobile Side
Then, on the mobile side, you can interact with it like this:
import * as FileSystem from "expo-file-system";
import Constants from "expo-constants";
async function uploadFile(uri: string) {
function getFunctionsUrl(): string {
if (USE_EMULATOR) {
const origin =
Constants?.manifest?.debuggerHost?.split(":").shift() || "localhost";
const functionsPort = 5001;
const functionsHost = `http://${origin}:${functionsPort}/{PROJECT_NAME}/${PROJECT_LOCATION}`;
return functionsHost;
} else {
return `https://{PROJECT_LOCATION}-{PROJECT_NAME}.cloudfunctions.net`;
}
}
// The url of your endpoint. Make this as smart as you want.
const url = `${getFunctionsUrl()}/uploadPhoto`;
// mimeType and idToken are assumed to be available in this scope
await FileSystem.uploadAsync(url, uri, {
httpMethod: "POST",
uploadType: FileSystem.FileSystemUploadType.MULTIPART,
fieldName: "file", // Important! Make sure this matches whatever you want busboy to validate as the "name" field on file.
mimeType,
headers: {
"content-type": "multipart/form-data",
Authorization: `${idToken}`,
},
});
}
TLDR
Wrap Cloud Storage in your own endpoint, treat it like a normal http upload, everything plays nice.

store sql queries as string in node server to get them as a response(express)

I am trying to do something that may or may not be possible.
I have an SQL file called "travel.sql" that I am trying to make an API out of, so I thought the simplest thing to do is to save the queries as strings in an array and then send that array of strings as the response from a node server (express.js).
So, simply, here's the code so far, but it returns nothing in Postman and I don't know what's missing.
I checked all the packages and they are installed properly.
const express = require('express')
const fse = require( "fs-extra" );
const { join } = require( "path" );
const app = express()
const port = 3000
app.get('/sqlfile', (req, res) => {
const loadSqlQueries = async folderName => {
// determine the file path for the folder
const filePath = join( process.cwd(), travel );
// get a list of all the files in the folder
const files = await fse.readdir( filePath );
// only files that have the .sql extension
const sqlFiles = files.filter( f => f.endsWith( ".sql" ) );
// loop over the files and read in their contents
const queries = {};
for ( let i = 0; i < sqlFiles.length; i++ ) {
const query = fse.readFileSync( join( filePath, sqlFiles[ i ] ), { encoding: "UTF-8" } );
queries[ sqlFiles[ i ].replace( ".sql", "" ) ] = query;
console.log(queries)
}
return queries;
res.send(queries);
};
})
app.listen(port, () => {
console.log(`Example app listening at http://localhost:${port}`)
})
I'm not quite sure what you are trying to achieve, but in any case multiple parts of your code need to be improved:
As a first proposal, I suggest adding a try/catch block to your code so you can see the errors you are facing.
You are creating a function expression "loadSqlQueries" which I think is not needed, and it never runs: you only create it but never use it.
As the function expression is not needed, the "return" is not needed either.
To be able to use "await", as in const files = await fse.readdir( filePath );, you need to be inside an "async" function.
You are using "travel" as a variable in const filePath = join( process.cwd(), travel );, but you need to use it as a string, like this: const filePath = join( process.cwd(), "travel" );
I've applied the changes mentioned above; kindly read the comments I added to your code to see the changes. Here is the final code:
const express = require('express')
const fse = require("fs-extra");
const { join } = require("path");
const app = express()
const port = 3000
app.get('/sqlfile',
// add async to be able to use await
async (req, res) => {
// add try and catch block to your code to catch the errors
try {
// no need for the function expression which is never used
// const loadSqlQueries = async folderName => {
// determine the file path for the folder
//use travel as a string not a variable
const filePath = join(process.cwd(), "travel");
// get a list of all the files in the folder
const files = await fse.readdir(filePath);
// only files that have the .sql extension
const sqlFiles = files.filter(f => f.endsWith(".sql"));
// loop over the files and read in their contents
const queries = {};
for (let i = 0; i < sqlFiles.length; i++) {
const query = fse.readFileSync(join(filePath, sqlFiles[i]), { encoding: "UTF-8" });
queries[sqlFiles[i].replace(".sql", "")] = query;
console.log(queries)
}
// As the function expression is not used we will comment return
// return queries;
res.send(queries);
// }
} catch (error) {
console.log(error);
}
})
app.listen(port, () => {
console.log(`Example app listening at http://localhost:${port}`)
})
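To verify the fix, you can hit the route and inspect the result (a hypothetical quick check with Node 18+'s built-in fetch; Postman works just as well):
fetch("http://localhost:3000/sqlfile")
  .then(res => res.json())
  .then(queries => console.log(Object.keys(queries))) // one key per .sql file found
  .catch(console.error);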

Serving public and private ports using Nestjs

I'm building a backend that aims to serve a mobile application. Besides serving the client, it will have several back-office functionalities.
We are using Swagger, and we do want to be able to access the Swagger docs of our back-office endpoints. However, we do not want to expose all of our endpoints publicly.
Assuming that having all endpoints public is a bad option, one solution we are thinking of is letting our server listen on two ports and then only exposing one port to the public. We have created a small sample repo that serves a client module and a back-office module on two different ports.
The main.ts looks like the following:
import { NestFactory } from '@nestjs/core';
import { ClientModule } from './modules/client/client.module';
import * as express from 'express';
import * as http from 'http';
import { ExpressAdapter } from '@nestjs/platform-express';
import { BackOfficeModule } from './modules/backoffice/backoffice.module';
import { SwaggerModule, DocumentBuilder } from '@nestjs/swagger';
async function bootstrap() {
const clientServer = express();
const clientApp = await NestFactory.create(
ClientModule,
new ExpressAdapter(clientServer),
);
const clientOptions = new DocumentBuilder()
.setTitle('ClientServer')
.setDescription('The client server API description')
.setVersion('1.0')
.addTag('client')
.build();
const clientDocument = SwaggerModule.createDocument(clientApp, clientOptions);
SwaggerModule.setup('api', clientApp, clientDocument);
await clientApp.init();
const backOfficeServer = express();
const backOfficeApp = await NestFactory.create(
BackOfficeModule,
new ExpressAdapter(backOfficeServer),
);
const backOfficeOptions = new DocumentBuilder()
.setTitle('BackOffice')
.setDescription('The back office API description')
.setVersion('1.0')
.addTag('backOffice')
.build();
const backOfficeDocument = SwaggerModule.createDocument(backOfficeApp, backOfficeOptions);
SwaggerModule.setup('api', backOfficeApp, backOfficeDocument);
await backOfficeApp.init();
http.createServer(clientServer).listen(3000); // The public port (load balancer will route traffic to this port)
http.createServer(backOfficeServer).listen(4000); // The private port (will be accessed through a bastion host or similar)
}
bootstrap();
Another option would be to create a bigger separation of the codebase and infrastructure; however, as this is a very early stage, we feel that is unnecessary.
Our question to the Nest community is thus: has anyone done this? If so, what are your experiences? What are the drawbacks of separating our backend code like this?
Disclaimer: this solution is for the express+REST combination.
Routing
Even though NestJS can't separate controllers based on port, it can separate them based on host. Using that, you can add a reverse proxy in front of your application that modifies the host header based on the port. Or you can do that in an express middleware, to make things even simpler. This is what I did:
async function bootstrap() {
const publicPort = 3000
const privatePort = 4000
const server = express()
server.use((req, res, next) => {
// act as a proper reverse proxy and set X-Forwarded-Host header if it hasn't been set
req.headers['x-forwarded-host'] ??= req.headers.host
switch (req.socket.localPort) {
case publicPort:
req.headers.host = 'public'
break
case privatePort:
req.headers.host = 'private'
break
default:
// this shouldn't be possible
res.sendStatus(500)
return
}
next()
})
const app = await NestFactory.create(AppModule, new ExpressAdapter(server))
http.createServer(server).listen(publicPort)
http.createServer(server).listen(privatePort)
}
Controllers:
@Controller({ path: 'cats', host: 'public' })
export class CatsController {...}
@Controller({ path: 'internal', host: 'private' })
export class InternalController {...}
Alternatively, you can simplify by creating your own PublicController and PrivateController decorators:
// decorator for public controllers, also sets guard
export const PublicController = (path?: string): ClassDecorator => {
return applyDecorators(Controller({ path, host: 'public' }), UseGuards(JwtAuthGuard))
}
// decorator for private controllers
export const PrivateController = (path?: string): ClassDecorator => {
return applyDecorators(Controller({ path, host: 'private' }))
}
@PublicController('cats')
export class CatsController {...}
@PrivateController('internal')
export class InternalController {...}
Swagger
For swagger, SwaggerModule.createDocument has an option "include", which accepts a list of modules to include in the swagger docs. With a bit of effort we can also turn the swagger serving part into an express Router, so both the private and public swagger can be served on the same path, for the different ports:
async function bootstrap() {
const publicPort = 3000
const privatePort = 4000
const server = express()
server.use((req, res, next) => {
// act as a proper reverse proxy and set X-Forwarded-Host header if it hasn't been set
req.headers['x-forwarded-host'] ??= req.headers.host
switch (req.socket.localPort) {
case publicPort:
req.headers.host = 'public'
break
case privatePort:
req.headers.host = 'private'
break
default:
// this shouldn't be possible
res.sendStatus(500)
return
}
next()
})
const app = await NestFactory.create(AppModule, new ExpressAdapter(server))
// setup swagger
const publicSwaggerRouter = await createSwaggerRouter(app, [CatsModule])
const privateSwaggerRouter = await createSwaggerRouter(app, [InternalModule])
server.use('/api', (req: Request, res: Response, next: NextFunction) => {
switch (req.headers.host) {
case 'public':
publicSwaggerRouter(req, res, next)
return
case 'private':
privateSwaggerRouter(req, res, next)
return
default:
// this shouldn't be possible
res.sendStatus(500)
return
}
})
http.createServer(server).listen(publicPort)
http.createServer(server).listen(privatePort)
}
async function createSwaggerRouter(app: INestApplication, modules: Function[]): Promise<Router> {
const swaggerConfig = new DocumentBuilder().setTitle('MyApp').setVersion('1.0').build()
const document = SwaggerModule.createDocument(app, swaggerConfig, { include: modules })
const swaggerUi = require('swagger-ui-express') // loadPackage is a NestJS internal helper; a plain require works too
const swaggerHtml = swaggerUi.generateHTML(document)
const router = Router()
.use(swaggerUi.serveFiles(document))
.get('/', (req: Request, res: Response, next: NextFunction) => {
res.send(swaggerHtml)
})
return router
}
That's OK, but if you want to run two servers on one host, I would recommend creating two files, like main-client.ts and main-back-office.ts, and running them in different processes, because in that case failures of one server will not affect the work of the other.
Also, if you do not run this in Docker, I would suggest tools like forever, pm2, supervisor, or my own very small library workers-cluster.
If you run it in Docker and don't want big refactoring, I would recommend creating a single Dockerfile with different CMD or ENTRYPOINT commands.
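A minimal sketch of that split, reusing the ClientModule, BackOfficeModule, and ports from the question (the file names are just the ones suggested above):
// main-client.ts
import { NestFactory } from '@nestjs/core';
import { ClientModule } from './modules/client/client.module';

async function bootstrap() {
  const app = await NestFactory.create(ClientModule);
  await app.listen(3000); // the public port
}
bootstrap();

// main-back-office.ts looks the same, except it creates BackOfficeModule
// and listens on 4000 (the private port).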
The NestJS docs cover how to let one server serve multiple ports:
https://docs.nestjs.com/faq/multiple-servers#multiple-simultaneous-servers
The following recipe shows how to instantiate a Nest application that listens on multiple ports (for example, on a non-HTTPS port and an HTTPS port) simultaneously.
const httpsOptions = {
key: fs.readFileSync('./secrets/private-key.pem'),
cert: fs.readFileSync('./secrets/public-certificate.pem'),
};
const server = express();
const app = await NestFactory.create(
ApplicationModule,
new ExpressAdapter(server),
);
await app.init();
http.createServer(server).listen(3000);
https.createServer(httpsOptions, server).listen(443);

How to use GraphQL subscriptions with Postgraphile?

I am very new to GraphQL, so maybe I am just missing something here.
I am currently creating a GraphQL endpoint for a PostgreSQL database using PostGraphile.
I am able to perform queries, but I just found out about GraphQL subscriptions, and apparently PostGraphile supports them, as explained here. Here is what I am doing.
Express Server
const express = require("express");
const { postgraphile, makePluginHook } = require("postgraphile");
const { default: PgPubsub } = require("@graphile/pg-pubsub");
const pluginHook = makePluginHook([PgPubsub]);
var cors = require('cors');
const app = express();
app.use(cors());
let url = "postgres://postgres:docker@localhost:5432/dvdrental";
app.use(postgraphile(url,"public", {
graphiql:true,
pluginHook,
subscriptions: true,
simpleSubscriptions: true,
}));
app.listen(3000);
Subscribing like
subscription {
listen(topic: "hello") {
query {
allPayments {
nodes {
customerId
paymentDate
nodeId
}
}
}
}
}
Triggering like
postgres=# select pg_notify( 'postgraphile:hello', '{}' );
But I am not seeing anything in the GraphiQL response.
WebSocket/Network
Here is what I get in the WebSocket/Network panel, @Benjie.
The {"type":"ka"} messages appear to be fired at random intervals.

How to have a Foxx service use base collection instead of mount specific ones

How can I have a Foxx service use base collections for auth operations? For example, I want the user management tutorial at https://docs.arangodb.com/3.3/Manual/Foxx/Users.html
to use the collections "users" and "sessions" instead of "test_users" and "test_sessions", where "test" is the name of my mount point.
I want to run multiple services all working off the same base collections. But if I go with what's given in the tutorials, I end up with auth collections and routes that are specific to a service, which doesn't make much sense to me.
My setup.js is:
'use strict';
const db = require('@arangodb').db;
const sessions = module.context.collectionName('sessions');
const users = module.context.collectionName('users');
if (!db._collection(sessions)) {
db._createDocumentCollection(sessions);
}
if (!db._collection(users)) {
db._createDocumentCollection(users);
}
db._collection(users).ensureIndex({
type: 'hash',
fields: ['username'],
unique: true
});
and my index.js is:
'use strict';
const joi = require('joi');
const createAuth = require('@arangodb/foxx/auth');
const createRouter = require('@arangodb/foxx/router');
const sessionsMiddleware = require('@arangodb/foxx/sessions');
// const db = require('@arangodb').db;
const auth = createAuth();
const router = createRouter();
const users = db._collection('users');
const sessions = sessionsMiddleware({
storage: module.context.collection('sessions'),
transport: 'cookie'
});
module.context.use(sessions);
module.context.use(router);
// continued
router.post('/signup', function (req, res) {
const user = {};
try {
user.authData = auth.create(req.body.password);
user.username = req.body.username;
user.perms = [];
const meta = users.save(user);
Object.assign(user, meta);
} catch (e) {
// Failed to save the user
// We'll assume the uniqueness constraint has been violated
res.throw('bad request', 'Username already taken', e);
}
req.session.uid = user._key;
req.sessionStorage.save(req.session);
res.send({success: true});
})
.body(joi.object({
username: joi.string().required(),
password: joi.string().required()
}).required(), 'Credentials')
.description('Creates a new user and logs them in.');
I tried using const users = db._collection('users'); instead of const users = module.context.collection('users'); but that throws swagger api errors.
To achieve that, you need to change the assignment of collection names from module.context.collectionName('nameOfCollection') to 'nameOfCollection' in all files, because module.context.collectionName prefixes the string with the name of the service.
So:
setup.js
const sessions = 'sessions';
const users = 'users';
index.js
const db = require('@arangodb').db;
const users = db._collection('users');
const sessions = sessionsMiddleware({
storage: 'sessions',
transport: 'cookie'
});
However, that approach is an antipattern for the case where more services need access to the same underlying collections (for example, the teardown of one service can delete those collections out from under the other services).
For that case you should utilize dependencies: only your auth service should have access to its own collections, and other services should have the auth service as a dependency and access auth data through it.
The auth service needs to have:
in manifest.json
"provides": {
"myauth": "1.0.0"
}
in index.js, or whatever file you point to as main in manifest.json:
module.exports = {
isAuthorized (id) {
return false; // your code for validating if user is authorized
}
};
The consuming service needs to have:
in manifest.json
"dependencies": {
"myauth": {
"name": "myauth",
"version": "^1.0.0",
"description": "Auth service.",
"required": true
}
}
And then you can access it and call it:
const myauth = module.context.dependencies.myauth;
if (myauth.isAuthorized()) {
// your code
} else {
res.throw(401);
}
For further steps in terms of authorizing requests, check how to use Middleware and Session middleware; a minimal sketch follows.
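For instance, a hedged sketch of guarding every route of the consuming service with a middleware that calls the myauth dependency from above (isAuthorized and req.session.uid are the names used earlier in this question and answer):
module.context.use(function (req, res, next) {
  const myauth = module.context.dependencies.myauth;
  if (!myauth.isAuthorized(req.session && req.session.uid)) {
    res.throw(401);
  }
  next();
});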
Godspeed.