Deno seems to target text files, but I also need to serve image files for my website.
You can use send().
The send() function is designed to serve static content as part of a middleware function. In the most straightforward usage, a root is provided, and requests made to the function are fulfilled with files from the local file system, relative to the root, based on the requested path.
import { Application, send } from 'https://deno.land/x/oak@v5.3.1/mod.ts'; // version pinned to match the complete example below

const app = new Application();

app.use(async (context) => {
  await send(context, context.request.url.pathname, {
    root: `${Deno.cwd()}/static`,
  });
});

await app.listen({ port: 8000 });
With the following directory structure:
static/
image.jpg
server.js
You can access the image by going to http://localhost:8000/image.jpg
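To sanity-check the response, here is a minimal sketch (assuming the server above is running; check.ts is just a hypothetical file name):
// deno run --allow-net check.ts
const res = await fetch('http://localhost:8000/image.jpg');
console.log(res.status, res.headers.get('content-type')); // expect: 200 image/jpeg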
Basically, you just need to set the correct headers for your image type and supply the image data as a Uint8Array:
In your middleware:
app.use(async (ctx, next) => {
  // ...
  // pngFilePath is the path to your image file on disk
  const imageBuf = await Deno.readFile(pngFilePath);
  ctx.response.body = imageBuf;
  ctx.response.headers.set('Content-Type', 'image/png');
});
Here's a complete working example, which will download a sample image (the digitized version of the hand-drawn deno logo) and serve it at http://localhost:8000/image, and display "Hello world" at all other addresses. The run options are in the comment on the first line:
server.ts
// deno run --allow-net=localhost:8000,deno.land --allow-read=deno_logo.png --allow-write=deno_logo.png server.ts
import { Application } from 'https://deno.land/x/oak@v5.3.1/mod.ts';
import { exists } from 'https://deno.land/std@0.59.0/fs/exists.ts';
// server listen options
const listenOptions = {
hostname: 'localhost',
port: 8000,
};
// sample image
const imageFilePath = './deno_logo.png';
const imageSource = 'https://deno.land/images/deno_logo.png';
const ensureLocalFile = async (localPath: string, url: string): Promise<void> => {
const fileExists = await exists(localPath);
if (fileExists) return;
console.log(`Downloading ${url} to ${localPath}`);
const response = await fetch(url);
if (!response.ok) throw new Error('Response not OK');
  const buf = new Uint8Array(await response.arrayBuffer());
  await Deno.writeFile(localPath, buf);
console.log('File saved');
};
await ensureLocalFile(imageFilePath, imageSource);
const app = new Application();
app.use(async (ctx, next) => {
// only match /image
if (ctx.request.url.pathname !== '/image') {
await next(); // pass control to next middleware
return;
}
const imageBuf = await Deno.readFile(imageFilePath);
ctx.response.body = imageBuf;
ctx.response.headers.set('Content-Type', 'image/png');
});
// default middleware
app.use((ctx) => {
ctx.response.body = "Hello world";
});
// log info about server
app.addEventListener('listen', ev => {
const defaultPortHttp = 80;
const defaultPortHttps = 443;
let portString = `:${ev.port}`;
if (
(ev.secure && ev.port === defaultPortHttps)
|| (!ev.secure && ev.port === defaultPortHttp)
) portString = '';
console.log(`Listening at http${ev.secure ? 's' : ''}://${ev.hostname ?? '0.0.0.0'}${portString}`);
console.log('Use ctrl+c to stop\n');
});
await app.listen(listenOptions);
Register middleware like this:
// serve static files
app.use(async (context, next) => {
try {
await context.send({
root: `${Deno.cwd()}/wwwroot/static`,
index: "index.html",
});
} catch {
await next();
}
});
I have a documents router with a router.post('/mine', [auth, uploadFile], async (req, res) => { ... }) route handler. The actual implementation of this route handler is below.
documents.js router
const createError = require('./../helpers/createError');
const auth = require('./../middlewares/auth');
const uploadFile = require('./../middlewares/uploadFile');
// The handler below also uses these; the exact require paths are assumed
const { User } = require('./../models/user');
const { Document } = require('./../models/document');
const { accessAndRemoveFile } = require('./../helpers/accessAndRemoveFile');
const express = require('express');
const router = express.Router();
router.post('/mine', [auth, uploadFile], async (req, res) => {
try {
let user = await User.findById(req.user._id);
let leftDiskSpace = await user.leftDiskSpace();
if(leftDiskSpace < 0) {
await accessAndRemoveFile(req.file.path);
res.status(403).send(createError('Your plan\'s disk space is exceeded.', 403));
} else {
let document = new Document({
filename: req.file.filename,
path: `/uploads/${req.user.username}/${req.file.filename}`,
size: req.file.size
});
document = await document.save();
user.documents.push(document._id);
user = await user.save();
res.send(document);
}
} catch(ex) {
res.status(500).send(createError(ex.message, 500));
}
});
module.exports = router;
I'm currently writing integration tests using Jest and Supertest. My current documents.test.js test file is below:
documents.test.js test file
const request = require('supertest');
const config = require('config'); // implied by the config.get() call below
const { Document } = require('../../../models/document');
const { User } = require('../../../models/user');
const fs = require('fs');
const path = require('path');
let server;
describe('/api/documents', () => {
beforeEach(() => { server = require('../../../bin/www'); });
afterEach(async () => {
let pathToTestFolder = path.join(process.cwd(), config.get('diskStorage.destination'), 'user');
// Remove test uploads folder for next tests
await fs.promises.access(pathToTestFolder)
.then(() => fs.promises.rm(pathToTestFolder, { recursive: true }))
.catch((err) => { return; });
// Remove all users and documents written in test database
await User.deleteMany({});
await Document.deleteMany({});
server.close();
});
describe('POST /mine', () => {
it('should call user.leftDiskSpace method once', async () => {
let user = new User({
username: 'user',
password: '1234'
});
user = await user.save();
let token = user.generateAuthToken();
let file = path.join(process.cwd(), 'tests', 'integration', 'files', 'test.json');
let documentsRouter = require('../../../routes/documents');
let errorToThrow = new Error('An error occurred...');
user.leftDiskSpace = jest.fn().mockRejectedValue(errorToThrow);
let mockReq = { user: user };
let mockRes = {};
documentsRouter.post = jest.fn();
documentsRouter.post.mockImplementation((path, callback) => {
if(path === '/mine') {
console.warn('called');
callback(mockReq, mockRes);
}
});
const res = await request(server)
.post('/api/documents/mine')
.set('x-auth-token', token)
.attach('document', file);
expect(documentsRouter.post).toHaveBeenCalled();
expect(user.leftDiskSpace).toHaveBeenCalled();
});
});
});
I create a mock post route handler for the documents.js router. As you can see from the mockImplementation for this route handler, it checks whether the path equals '/mine' (my supertest endpoint), then calls console.warn('called') and the callback. When I run this test file, I cannot see any yellow warning message with the body 'called'. Also, when I POST to the endpoint /api/documents/mine, the server doesn't trigger my mock function documentsRouter.post; it is never called. So I think the server's documents router is not being replaced with my mock post route handler; it still uses the original post route handler to respond to my POST request. What should I do to test whether my mock documentsRouter.post function has been called?
Note that my User model has a custom method for checking the user's remaining disk space. I also tried to mock that mongoose custom method, but it doesn't work either.
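A possible approach (a minimal sketch, untested against this codebase; it assumes leftDiskSpace is defined via userSchema.methods and therefore lives on User.prototype, and the mocked return value of 1024 bytes is an assumption): since the app registered the real router at startup, replacing documentsRouter.post afterwards has no effect. Instead, mock the method on the model prototype, so the instance returned by User.findById inside the real handler picks up the mock.
const { User } = require('../../../models/user');

let leftDiskSpaceSpy;

beforeEach(() => {
  // Every User instance, including the one the route handler loads,
  // now resolves leftDiskSpace() with the mocked value.
  leftDiskSpaceSpy = jest
    .spyOn(User.prototype, 'leftDiskSpace')
    .mockResolvedValue(1024); // assumed: bytes remaining
});

afterEach(() => {
  leftDiskSpaceSpy.mockRestore();
});

it('should call user.leftDiskSpace method once', async () => {
  let user = new User({ username: 'user', password: '1234' });
  user = await user.save();
  const token = user.generateAuthToken();
  const file = path.join(process.cwd(), 'tests', 'integration', 'files', 'test.json');
  await request(server)
    .post('/api/documents/mine')
    .set('x-auth-token', token)
    .attach('document', file);
  expect(leftDiskSpaceSpy).toHaveBeenCalledTimes(1);
});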
I tried to solve the problem, but I don't understand why the file is uploaded with a size of 0 KB.
I saw this code in a tutorial, where it works, but it doesn't work for me:
const { ApolloServer, gql } = require('apollo-server');
const path = require('path');
const fs = require('fs');
const typeDefs = gql`
type File {
url: String!
}
type Query {
hello: String!
}
type Mutation {
fileUpload(file: Upload!): File!
}
`;
const resolvers = {
Query: {
hello: () => 'Hello world!',
},
Mutation: {
fileUpload: async (_, { file }) => {
const { createReadStream, filename, mimetype, encoding } = await file;
const stream = createReadStream();
const pathName = path.join(__dirname, `/public/images/${filename}`);
await stream.pipe(fs.createWriteStream(pathName));
return {
url: `http://localhost:4000/images/${filename}`,
};
},
},
};
const server = new ApolloServer({
typeDefs,
resolvers,
});
server.listen().then(({ url }) => {
console.log(`🚀 Server ready at ${url}`);
});
Then when I upload the file, it is uploaded, but the file is 0 KB.
What is happening is the resolver is returning before the file has uploaded, causing the server to respond before the client has finished uploading. You need to promisify and await the file upload stream events in the resolver.
Here is an example:
https://github.com/jaydenseric/apollo-upload-examples/blob/c456f86b58ead10ea45137628f0a98951f63e239/api/server.js#L40-L41
In your case:
const { createWriteStream, unlink } = require("fs");
const path = require("path");

const resolvers = {
  Query: {
    hello: () => "Hello world!",
  },
  Mutation: {
    fileUpload: async (_, { file }) => {
      const { createReadStream, filename } = await file;
      const stream = createReadStream();
      // Don't shadow the path module; use a separate variable for the target.
      const storagePath = path.join(__dirname, `/public/images/${filename}`);
      // Store the file in the filesystem.
      await new Promise((resolve, reject) => {
        // Create a stream to which the upload will be written.
        const writeStream = createWriteStream(storagePath);
        // When the upload is fully written, resolve the promise.
        writeStream.on("finish", resolve);
        // If there's an error writing the file, remove the partially written
        // file and reject the promise.
        writeStream.on("error", (error) => {
          unlink(storagePath, () => {
            reject(error);
          });
        });
        // In Node.js <= v13, errors are not automatically propagated between
        // piped streams. If there is an error receiving the upload, destroy the
        // write stream with the corresponding error.
        stream.on("error", (error) => writeStream.destroy(error));
        // Pipe the upload into the write stream.
        stream.pipe(writeStream);
      });
      return {
        url: `http://localhost:4000/images/${filename}`,
      };
    },
  },
};
Note that it's probably not a good idea to use the filename like that to store the uploaded files, as future uploads with the same filename will overwrite earlier ones. I'm not really sure what will happen if two files with the same name are uploaded at the same time by two clients.
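One hedged way around that (not from the original answer; uniqueName is a hypothetical helper): store the file under a randomized name and keep the original filename only for display.
const crypto = require("crypto");
const path = require("path");

// Prefix the stored name with random bytes so repeated or concurrent uploads
// of "logo.png" never collide on disk.
const uniqueName = (filename) =>
  `${crypto.randomBytes(8).toString("hex")}-${path.basename(filename)}`;

// e.g. uniqueName("logo.png") -> "3f9c0a1b2c3d4e5f-logo.png"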
I'm trying to deploy a Lambda function allowing me to upload a picture to S3.
The Lambda works well offline, but when I deploy it to AWS, the function doesn't work.
The first error I encountered was this one:
ERROR (node:7) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.
So, I followed the recommendation to use the Buffer.from() method instead. But it doesn't work either: the Lambda runs until the timeout.
Can someone tell me where I went wrong or suggest another solution?
Below my lambda function :
const AWS = require("aws-sdk");
const Busboy = require("busboy");
const uuidv4 = require("uuid/v4");
require("dotenv").config();
AWS.config.update({
accessKeyId: process.env.ACCESS_KEY_ID,
secretAccessKey: process.env.SECRET_ACCESS_KEY,
subregion: process.env.SUB_REGION
});
const s3 = new AWS.S3();
const getContentType = event => {
// see the second block of codes
};
const parser = event => {
// see the third block of codes
};
module.exports.main = (event, context, callback) => {
context.callbackWaitsForEmptyEventLoop = false;
const uuid = uuidv4();
const uploadFile = async (image, uuid) =>
new Promise(() => {
// const bitmap = new Buffer(image, "base64"); // <====== deprecated
const bitmap = Buffer.from(image, "base64"); // <======== problem here
const params = {
Bucket: "my_bucket",
Key: `${uuid}.jpeg`,
ACL: "public-read",
Body: bitmap,
ContentType: "image/jpeg"
};
s3.putObject(params, function(err, data) {
if (err) {
return callback(null, "ERROR");
}
return callback(null, "SUCCESS");
});
});
parser(event).then(() => {
uploadFile(event.body.file, uuid);
});
};
getContentType():
const getContentType = event => {
const contentType = event.headers["content-type"];
if (!contentType) {
return event.headers["Content-Type"];
}
return contentType;
};
parser():
const parser = event =>
new Promise((resolve, reject) => {
const busboy = new Busboy({
headers: {
"content-type": getContentType(event)
}
});
const result = {};
busboy.on("file", (fieldname, file, filename, encoding, mimetype) => {
file.on("data", data => {
result.file = data;
});
file.on("end", () => {
result.filename = filename;
result.contentType = mimetype;
});
});
busboy.on("field", (fieldname, value) => {
result[fieldname] = value;
});
busboy.on("error", error => reject(error));
busboy.on("finish", () => {
event.body = result;
resolve(event);
});
busboy.write(event.body, event.isBase64Encoded ? "base64" : "binary");
busboy.end();
});
For reference, the deprecated Buffer constructor calls map to the new API as follows:
new Buffer(number) // Old
Buffer.alloc(number) // New
new Buffer(string) // Old
Buffer.from(string) // New
new Buffer(string, encoding) // Old
Buffer.from(string, encoding) // New
new Buffer(...arguments) // Old
Buffer.from(...arguments) // New
You are using callbackWaitsForEmptyEventLoop, which basically makes the Lambda function think that the work is not over yet. Also, you are wrapping the upload in a promise but never resolving it. You can simplify this logic using the built-in .promise() method of the aws-sdk:
module.exports.main = async event => {
const uuid = uuidv4();
await parser(event); // not sure if this needs to be async or not. check
  const bitmap = Buffer.from(event.body.file, "base64");
const params = {
Bucket: "my_bucket",
Key: `${uuid}.jpeg`,
ACL: "public-read",
Body: bitmap,
ContentType: "image/jpeg"
};
const response = await s3.putObject(params).promise();
return response;
};
Hello, I am creating an Express service that prints to a POS printer on HTTP request. The problem is that my solution does not print images on the first print, but it does on the second and onwards.
I am using https://github.com/song940/node-escpos
Here is my server.js
var print = require('./print')
var express = require('express')
var cors = require('cors')
var bodyParser = require('body-parser');
var app = express();
var corsOptions = {
origin: '*',
optionsSuccessStatus: 200 // some legacy browsers (IE11, various SmartTVs) choke on 204
}
app.get('/print', cors(corsOptions), (req, res) => {
print.demoPrint();
return res.send(req.body);
});
app.listen(3000, () => {
console.log('Express server is started')
});
Here is my print.js
const escpos = require('escpos');
const device = new escpos.USB(0x1504,0x003d);
const options = { encoding: "GB18030"};
const printer = new escpos.Printer(device);
const fs = require('fs')
const printSettings = JSON.parse(fs.readFileSync('printConfig.json', 'utf8'));
exports.demoPrint = () => {
device.open(function() {
printSettings.forEach((currentLine) => {
if(currentLine.printType === "text") {
console.log("PRINTING TEXT");
printer
.font(currentLine.font)
.align(currentLine.align)
.style('bu')
.size(currentLine.size, currentLine.size)
.text(currentLine.text)
}
else if(currentLine.printType === "image") {
escpos.Image.load(currentLine.path, (image) => {
console.log("PRINTING IMAGE");
printer
.align(currentLine.align)
.size(currentLine.size, currentLine.size)
.image(image)
});
}
else if(currentLine.printType === "barcode") {
console.log("PRINTING BARCODE");
printer
.align(currentLine.align)
.barcode(currentLine.text, 'CODE39', {
height: 64,
font: 'B'
});
}
});
printer
.text('\n')
.text('\n')
.cut()
.close();
});
};
Since this hasn't been answered yet, I will provide the solution that worked for me. The problem was that the images only started loading the first time I submitted a request; by the second request they were already loaded and printed successfully.
I solved the problem by adding a check that would not allow printing until the images were loaded.
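Here is a minimal sketch of such a check, assuming the same print.js structure as above (the imageCache and imagesLoaded names are mine, not from the original code):
const escpos = require('escpos');
const fs = require('fs');

const printSettings = JSON.parse(fs.readFileSync('printConfig.json', 'utf8'));
const imageCache = {}; // path -> loaded escpos.Image
let imagesLoaded = false;

// Preload every image referenced in the config at startup.
const imageLines = printSettings.filter((line) => line.printType === 'image');
let pending = imageLines.length;
if (pending === 0) imagesLoaded = true;
imageLines.forEach((line) => {
  escpos.Image.load(line.path, (image) => {
    imageCache[line.path] = image;
    if (--pending === 0) imagesLoaded = true;
  });
});

exports.demoPrint = () => {
  if (!imagesLoaded) {
    console.log('Images are still loading; try again shortly');
    return;
  }
  // ...open the device and print as in print.js above, but use the already
  // loaded imageCache[currentLine.path] instead of calling
  // escpos.Image.load inside the loop.
};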
I am new to react-admin. I am using react-admin to upload a file, and I have followed the steps mentioned in the tutorial.
But after I submit the request, I see the HTTP trace below: it contains a blob link instead of the Base64 image payload.
{
"pictures": {
"rawFile": {
"preview": "blob:http://127.0.0.1:3000/fedcd180-cdc4-44df-b8c9-5c7196788dc6"
},
"src": "blob:http://127.0.0.1:3000/fedcd180-cdc4-44df-b8c9-5c7196788dc6",
"title": "Android_robot.png"
}
}
Can someone please advise how to get the Base64 image payload instead of the link?
Check to see if you have this handler; most likely you did not change the resource name posts to your own:
const addUploadCapabilities = requestHandler => (type, resource, params) => {
  if (type === 'UPDATE' && resource === 'posts') {
    // ...convert params.data.pictures to Base64 here (see the dataProvider below)...
  }
  return requestHandler(type, resource, params);
};
Create your custom dataProvider to convert picture to base64
import restServerProvider from 'ra-data-json-server';
const servicesHost = 'http://localhost:8080/api';
const dataProvider = restServerProvider(servicesHost);
const myDataProvider = {
...dataProvider,
create: (resource, params) => {
if (resource !== 'your-route' || !params.data.pictures) {
// fallback to the default implementation
return dataProvider.create(resource, params);
}
const myFile = params.data.pictures;
    if (!(myFile.rawFile instanceof File)) {
      return Promise.reject('Error: Not a file...'); // Didn't test this...
    }
return Promise.resolve( convertFileToBase64(myFile) )
.then( (picture64) => ({
src: picture64,
title: `${myFile.title}`
}))
.then( transformedMyFile => dataProvider.create(resource, {
...params,
data: {
...params.data,
myFile: transformedMyFile
}
}));
}
};
const convertFileToBase64 = file => new Promise((resolve, reject) => {
const reader = new FileReader();
reader.readAsDataURL(file.rawFile);
reader.onload = () => resolve(reader.result);
reader.onerror = reject;
});
export default myDataProvider;
And get image data at your Server API
exports.create = (req, res) => {
if(req.body.myFile){
var file = req.body.myFile;
var fs = require('fs');
var data = file.src.replace(/^data:image\/\w+;base64,/, "");
var buf = Buffer.from(data, 'base64');
fs.writeFile(`upload/${file.title}`, buf, err => {
if (err) throw err;
console.log('Saved!');
});
}};