How to define a file upload/download endpoint in Apollo / GraphQL?

I'm wondering what's the best method to get file upload/download working in Apollo GraphQL:
type Query {
  getFile: File # <-- Errors out, File isn't recognized as a type
}

type Mutation {
  uploadFile(File): Int
}
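
For context, one widely used approach is the graphql-upload package, which provides an Upload scalar for uploads, while downloads are usually served as plain HTTP URLs rather than streamed through GraphQL itself. A minimal sketch, assuming apollo-server-express and graphql-upload (v11-style imports; saveToDisk is a hypothetical helper, not part of either library):

const express = require('express');
const { ApolloServer, gql } = require('apollo-server-express');
// Import style varies by graphql-upload version; v11/v12 export these names directly.
const { GraphQLUpload, graphqlUploadExpress } = require('graphql-upload');

const typeDefs = gql`
  scalar Upload

  type File {
    filename: String!
    mimetype: String!
    url: String!
  }

  type Query {
    # Downloads are usually served over plain HTTP;
    # the query just returns a URL to the stored file.
    getFile(id: ID!): File
  }

  type Mutation {
    uploadFile(file: Upload!): File!
  }
`;

const resolvers = {
  Upload: GraphQLUpload, // wire the scalar to the package's implementation
  Mutation: {
    uploadFile: async (_parent, { file }) => {
      const { createReadStream, filename, mimetype } = await file;
      // saveToDisk is a hypothetical helper that persists the stream
      // and returns a download URL.
      const url = await saveToDisk(createReadStream(), filename);
      return { filename, mimetype, url };
    },
  },
};

async function start() {
  const app = express();
  app.use(graphqlUploadExpress()); // parses multipart/form-data upload requests
  const server = new ApolloServer({ typeDefs, resolvers });
  await server.start();
  server.applyMiddleware({ app });
  app.listen(4000);
}
start();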


How to handle JWT authentication with RxDB?

I have a local RxDB database and I want to connect it with CouchDB. Everything seems to work fine except for authentication. I have no idea how to add it other than inserting credentials in the database URL:
database.tasks.sync({
  remote: `http://${username}:${pass}@127.0.0.1:5984/tododb`,
});
I would like to use JWT auth, but can't find how to add a token to the sync request. I found only some solutions for PouchDB (the pouchdb-authentication plugin) but can't get them working with RxDB.
RxDB is tightly coupled with PouchDB and uses its sync implementation under the hood. To my understanding, the only way to add custom headers to a remote PouchDB instance (which is what is created for you when you pass a URL as the remote argument of sync) is to intercept the HTTP request:
var db = new PouchDB('http://example.com/dbname', {
  fetch: function (url, opts) {
    opts.headers.set('X-Some-Special-Header', 'foo');
    return PouchDB.fetch(url, opts);
  }
});
The PouchDB replication documentation (sync) also states:
The remoteDB can either be a string or a PouchDB object. If you have a fetch override on a remote database, you will want to use PouchDB objects instead of strings, so that the options are used.
Luckily, RxDB's Rx.Collection.sync not only accepts a server URL as the remote argument, but also another RxCollection or a PouchDB instance.
RxDB even re-exports the internally used PouchDB module, so you do not have to install PouchDB as a direct dependency:
import { ..., PouchDB } from 'rxdb';

// ...

const remotePouch = new PouchDB('http://127.0.0.1:5984/tododb', {
  fetch: function (url, opts) {
    opts.headers.set('Authorization', `Bearer ${getYourJWTToken()}`);
    return PouchDB.fetch(url, opts);
  }
});

database.tasks.sync({
  remote: remotePouch,
});

Enabling binary media types breaks the OPTIONS preflight call (CORS) for POST in AWS Lambda

New to AWS.
We have a .NET Core microservice running serverless on AWS as Lambda functions.
Our controller looks like this:
[Route("api/[controller]")]
[ApiController]
public class SomeController : ControllerBase
{
    [HttpGet()]
    [Route("getsomedoc")]
    public async Task<IActionResult> GetSomeDoc()
    {
        byte[] content;
        // UI needs this to process the document
        var contentDisposition = new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment");
        contentDisposition.FileName = "File Name";
        Response.Headers[HeaderNames.ContentDisposition] = contentDisposition.ToString();
        return File(content, "application/octet-stream");
    }

    [HttpPost()]
    [Route("somepost")]
    public async Task<IActionResult> SomePost()
    {
        return null;
    }
}
URLs:
{{URL}}/getsomedoc
{{URL}}/somepost
We have enabled 'Binary Media Types' in the API Gateway settings (set to */*) for getsomedoc to work; otherwise it was returning the byte array instead of the file.
But this breaks our 'somepost' call when the UI accesses the API using
Method: OPTIONS with Access-Control-Request-Method set to POST.
When we remove the binary media type, 'somepost' starts working again.
Looking for suggestions as to why this might be happening, and what we can add/remove in the gateway to fix it.
Well, we ended up resolving this in a strange way.
We added two API Gateways for the Lambda:
- one with binary media types enabled
- one with them disabled
Then:
getsomedoc - uses the gateway where binary media types are enabled
somepost - uses the other one
Wish there was a better way!
I have found this same behavior with my API. While looking everywhere for some help, I found a few things that address the issue:
Basically, this bug report says the problem is having CORS enabled while also using the generic Binary Media Type "*/*". Apparently the OPTIONS method gets confused by this. They discuss this in terms of using Serverless, but it should apply to using the console or other ways of interacting with AWS.
They link to a possible solution: you can modify the Integration Response of the OPTIONS method - change the Mapping Template's Content-Type to an actual binary media type, like image/jpeg. They say this allows you to leave the binary media type in Settings as "*/*". This is a little hacky, but at least it is something.
There was also this alternate suggestion in the issues section of this GitHub repo that is a little less hacky. You can set the content handling parameter of the OPTIONS Integration Request to "CONVERT_TO_TEXT"... but you can only do this via CloudFormation or the CLI (not via the console). This is also the solution recommended by some AWS technicians.
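The same patch can also be scripted. A minimal sketch, assuming the AWS SDK for JavaScript (v2); the restApiId and resourceId values are placeholders you would look up for your own gateway:

const AWS = require('aws-sdk');
const apigateway = new AWS.APIGateway({ region: 'us-east-1' });

// Switch the OPTIONS integration to CONVERT_TO_TEXT so the CORS
// preflight is unaffected by the */* binary media type setting.
apigateway.updateIntegration({
  restApiId: 'your-rest-api-id',  // placeholder
  resourceId: 'your-resource-id', // placeholder
  httpMethod: 'OPTIONS',
  patchOperations: [
    { op: 'replace', path: '/contentHandling', value: 'CONVERT_TO_TEXT' }
  ]
}, (err, data) => {
  if (err) console.error(err);
  else console.log('OPTIONS integration updated', data);
});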
Another possible workaround is to set up a custom Lambda function to handle the OPTIONS request; this way the API Gateway can keep the */* binary media type.
Create a new Lambda function for handling OPTIONS requests:
exports.handler = async (event) => {
  const response = {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Headers': 'access-control-allow-origin, content-type, access-control-allow-methods',
      'Access-Control-Allow-Methods': 'GET,POST,PUT,DELETE,OPTIONS'
    },
    body: JSON.stringify('OK')
  };
  return response;
};
In your API Gateway OPTIONS method, change the integration type from Mock to Lambda Function.
Make sure to check 'Use Lambda proxy integration'
Select the correct region and point to the created Lambda Function
This way any OPTIONS request made from the browser will trigger the Lambda function and return the custom response.
Be aware this solution might involve costs.

Failed to load resource: the server responded with status 404 (Not Found) in console in Angular 5

This is quiz.service.ts:
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable()
export class QuizService {
  readonly rootUrl = 'http://localhost:4200';

  constructor(private http: HttpClient) {
  }

  insertParticipant(name: string, email: string) {
    const body = {
      Name: name,
      Email: email
    };
    return this.http.post(this.rootUrl + '/api/InsertParticipant', body);
  }
}
I get this error in the console:
Failed to load resource: the server responded with status 404 (Not Found)
I think it's a URL issue.
My Angular version details:
Angular CLI: 1.7.4
Node: 12.13.0
OS: win32 x64
Angular: 5.2.11
I think the URL has a routing problem.
How do I solve the error?
What is the proper way to route the URL in Angular 5.2?
As far as I can see, you don't have a backend and are sending requests to the Angular dev server itself (maybe I'm wrong).
But if that's true, you need a server with a backend API to make HTTP requests against (it can be a server on your local machine or a remote server), and you need a method that will process your request.
The request will look like this: your_server_url/api/insert_participant.
Also, you can use https://www.npmjs.com/package/angular-in-memory-web-api to emulate CRUD operations without a server API. It intercepts Angular Http and HttpClient requests that would otherwise go to the remote server and redirects them to an in-memory data store that you control.
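A minimal sketch of that in-memory approach (assuming angular-in-memory-web-api; the participants collection and its fields are hypothetical examples):

import { NgModule } from '@angular/core';
import { HttpClientModule } from '@angular/common/http';
import { InMemoryDbService, HttpClientInMemoryWebApiModule } from 'angular-in-memory-web-api';

// An in-memory "database" that answers requests to api/participants
export class InMemDataService implements InMemoryDbService {
  createDb() {
    const participants = [
      { id: 1, name: 'Test User', email: 'test@example.com' }
    ];
    return { participants };
  }
}

@NgModule({
  imports: [
    HttpClientModule,
    // Intercepts HttpClient calls and serves them from InMemDataService
    HttpClientInMemoryWebApiModule.forRoot(InMemDataService)
  ]
})
export class AppModule { }

With this module in place, a call like this.http.post('api/participants', body) would be handled in memory instead of hitting a real server.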

In Ratpack, how can I configure loading configuration from an external file?

I have a Ratpack app written with the Groovy DSL. (Embedded in Java, so not a script.)
I want to load the server's SSL certificates from a config file supplied in the command line options. (The certs will be directly embedded in the config, or possibly in a PEM file referenced somewhere in the config.)
For example:
java -jar httpd.jar /etc/app/sslConfig.yml
sslConfig.yml:
---
ssl:
  privateKey: file:///etc/app/privateKey.pem
  certChain: file:///etc/app/certChain.pem
I seem to have a chicken-and-egg problem: I want to use serverConfig's facilities for reading the config file in order to configure the SslContext within that same serverConfig, but the server config isn't created yet at the point where I want to load the SslContext.
To illustrate, the DSL definition I have is something like this:
// SSL Config POJO definition
class SslConfig {
    String privateKey
    String certChain
    SslContext build() { /* ... */ }
}

// ... other declarations here...

Path configPath = Paths.get(args[1]) // get this path from the CLI options

ratpack {
    serverConfig {
        yaml "/defaultConfig.yaml" // Defaults defined in this resource
        yaml configPath // The user-supplied config file
        env()
        sysProps('genset-server')
        require("/ssl", SslConfig) // Map the config to a POJO
        ssl sslConfig // HOW DO I GET AN INSTANCE OF that SslConfig POJO HERE?
        baseDir BaseDir.find()
    }
    handlers {
        get { // ...
        }
    }
}
Possibly there is a solution to this (loading the SSL context in a later block?).
Or possibly there is just a better way to go about the whole thing?
You could create a separate ConfigDataBuilder to load a config object and deserialize your SSL config.
Alternatively, you can bind directly to server.ssl. All of the ServerConfig properties bind to the server space within the config.
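For the first suggestion, a minimal sketch (assuming configPath is the Path from the question and SslConfig is the question's POJO; the exact DSL wiring may differ):

import ratpack.config.ConfigData

// Build a standalone ConfigData first, solely to extract the SSL settings,
// so the chicken-and-egg dependency on serverConfig disappears.
ConfigData bootstrap = ConfigData.of { spec ->
    spec.yaml(configPath)
}
SslConfig sslConfig = bootstrap.get("/ssl", SslConfig)

ratpack {
    serverConfig {
        // ... the other config directives from the question ...
        ssl sslConfig.build() // build() returns the SslContext, as in the question
    }
}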
The solution I am currently using is this, with the addition of a #builder method to SslConfig which returns an SslContextBuilder configured from its other fields.
ratpack {
    serverConfig {
        // Defaults defined in this resource
        yaml RatpackEntryPoint.getResource("/defaultConfig.yaml")
        // Optionally load the config path passed via the configFile parameter (if not null)
        switch (configPath) {
            case ~/.*[.]ya?ml/: yaml configPath; break
            case ~/.*[.]json/: json configPath; break
            case ~/.*[.]properties/: props configPath; break
        }
        env()
        sysProps('genset-server')
        require("/ssl", SslConfig) // Map the config to a POJO
        baseDir BaseDir.find()
        // This is the important change.
        // It apparently needs to come last, because it prevents
        // later config directives from working without errors.
        ssl build().getAsConfigObject('/ssl', SslConfig).object.builder().build()
    }
    handlers {
        get { // ...
        }
    }
}
Essentially this performs an extra build of the ServerConfig in order to redefine the input to the second build, but it works.

How to mock a long JSON response via the dojo registry?

I'm trying to follow this article to mock some responses.
I'm porting mocked data from an existing mocking service. There are some really long JSON responses, such as:
"{\"Layout\":{\"Id\":\
.......
"Image1\":\"test.png\",\"Image2\":\"\",\"multi\":[\"test1\",\"test2\",\"test3\"]}}"
There are a few hundred lines in the "......". Is there an easy way of doing this? Can I load the response from a file when I register the mock response?
It should be trivial to set up a mock provider to respond with data from separate local static files. Just return the result of sending an XHR (with dojo/request/xhr) to the desired static resource.
The following example assumes you have a static JSON resource in a path relative to your mock provider module:
define([
  'require',
  'dojo/request/registry',
  'dojo/request/xhr'
], function (require, registry, xhr) {
  registry.register('/service', function (url, options) {
    // Presuming you're already passing handleAs: 'json' in options anyway,
    // you can just pass options to the xhr call as-is.
    return xhr(require.toUrl('./data/sample.json'), options);
  });

  // ...

  return registry;
});