I am using Nest with the following connection settings:
var connectionPool = new SingleNodeConnectionPool(new Uri("http://localhost:9200"));
var settings = new ConnectionSettings(connectionPool, new InMemoryConnection());
settings.DisableDirectStreaming(true); // needed to see a readable debug log on insert
settings.DefaultIndex(Index);
Client = new ElasticClient(settings);
With new InMemoryConnection() I hope to be able to query with NEST while changing data inside an Azure Cloud Function.
Strangely, the debug logs look promising. Indexing:
/*
var res = await Client.IndexManyAsync(response.Elements, Index); //
Console.WriteLine(res.DebugInformation);
*/
/*
var res = await Client.IndexAsync(response, i => i.Index(Index)); // Index = "data"
Console.WriteLine(res.DebugInformation); // <--
*/
And when logging directly after the insertions, the count is 0:
// var anyDocs = await Client.CountAsync<OverpassElement>(c => c.Index(Index));
var anyDocs = await Client.CountAsync<OverpassElement>(c => c);
Console.WriteLine("count: " + anyDocs.Count);
...but the entire JSON data is being logged with the insertion.
How come I can't count the documents (so that I can search them in a next step) after insertion?
Actually I get:
Invalid NEST response built from a successful (200) low level call on POST: /data/_doc
And there are 0 Items on the IndexResponse when inserting.
The data consists of Element objects; the following is one item of an array containing 4221 such items:
{
"type": "relation",
"id": 8353694,
"timestamp": "2018-06-04T22:54:27Z",
"version": 1,
"changeset": 59551528,
"user": "asdf2",
"uid": 1416503,
"members": [
{
"type": "way",
"ref": 89956942,
"role": "from"
},
{
"type": "node",
"ref": 1042756547,
"role": "via"
},
{
"type": "way",
"ref": 89956938,
"role": "to"
}
],
"tags": {
"restriction": "no_left_turn",
"type": "restriction"
}
},
Elasticsearch has many similarities to a NoSQL data store. In this case, "read after write" is not guaranteed by default. When the index API call returns success, it doesn't mean "this document is now available for searching"; it means "Elasticsearch has accepted your document and it will be available for searching shortly". Elasticsearch uses eventual consistency by default.
However, this can be annoying during testing. So Elasticsearch has a Refresh API that essentially just blocks until all documents already indexed are available for searching. I strongly recommend that you do not call this in production; only in test code.
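For example, a minimal sketch (assuming NEST 7.x and the names from the question):
// Test code only: force a refresh so just-indexed documents become searchable
await Client.Indices.RefreshAsync(Index);
// Or ask Elasticsearch to wait for a refresh as part of the index call itself
var res = await Client.IndexAsync(response, i => i
    .Index(Index)
    .Refresh(Elasticsearch.Net.Refresh.WaitFor));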
At the risk of reviving an old question: this answer from Russ Cam explains that InMemoryConnection does not actually run the operation against Elasticsearch.
InMemoryConnection doesn't actually send any requests or receive any responses from Elasticsearch; used in conjunction with .SetConnectionStatusHandler() on Connection settings (or .OnRequestCompleted() in NEST 2.x+), it's a convenient way to see the serialized form of requests.
So you can inspect the query that NEST generates from your code but you won't be able to observe the results.
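For instance, a minimal sketch of the settings from the question with InMemoryConnection removed, so requests actually reach the cluster while each call is still logged:
var connectionPool = new SingleNodeConnectionPool(new Uri("http://localhost:9200"));
var settings = new ConnectionSettings(connectionPool)  // no InMemoryConnection
    .DisableDirectStreaming()                          // keep bytes for DebugInformation
    .DefaultIndex(Index)
    .OnRequestCompleted(callDetails => Console.WriteLine(callDetails.DebugInformation));
Client = new ElasticClient(settings);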
I don't know what NEST is, but I'd bet $100 that if it uses transactional concepts, maybe you need to commit in order to see the count correctly?
I have tried this query:
query {
securityAdvisories(orderBy: {field: PUBLISHED_AT, direction: DESC}, first: 2) {
nodes {
description
ghsaId
summary
publishedAt
}
}
}
And got the below response
{
"data": {
"securityAdvisories": {
"nodes": [
{
"description": "In Symfony before 2.7.51, 2.8.x before 2.8.50, 3.x before 3.4.26, 4.x before 4.1.12, and 4.2.x before 4.2.7, when service ids allow user input, this could allow for SQL Injection and remote code execution. This is related to symfony/dependency-injection.",
"ghsaId": "GHSA-pgwj-prpq-jpc2",
"summary": "Critical severity vulnerability that affects symfony/dependency-injection",
"publishedAt": "2019-11-18T17:27:31Z"
},
{
"description": "Tapestry processes assets `/assets/ctx` using classes chain `StaticFilesFilter -> AssetDispatcher -> ContextResource`, which doesn't filter the character `\\`, so attacker can perform a path traversal attack to read any files on Windows platform.",
"ghsaId": "GHSA-89r3-rcpj-h7w6",
"summary": "Moderate severity vulnerability that affects org.apache.tapestry:tapestry-core",
"publishedAt": "2019-11-18T17:19:03Z"
}
]
}
}
}
But I want to get the response for a specific security advisory, i.e. the GraphQL response for a specific ID. In the example below, the ID is GHSA-wmx6-vxcf-c3gr.
Thanks!
The simplest way would be to use the securityAdvisory() query.
query {
securityAdvisory(ghsaId: "GHSA-wmx6-vxcf-c3gr") {
ghsaId
summary
}
}
If you need to use the securityAdvisories() query for some reason, you simply have to add an identifier: argument. The following query should get the distinct entry for GHSA-wmx6-vxcf-c3gr.
query {
  securityAdvisories(identifier: {type: GHSA, value: "GHSA-wmx6-vxcf-c3gr"}, first: 1) {
    nodes {
      ghsaId
      summary
    }
  }
}
Using GitHub GraphQL API (v.4) I would like to get all the branch names existing on a given repository.
My attempt
{
repository(name: "my-repository", owner: "my-account") {
... on Ref {
name
}
}
}
returns error:
{'data': None, 'errors': [{'message': "Fragment on Ref can't be spread inside Repository", 'locations': [{'line': 4, 'column': 13}]}]}
Here's how to retrieve 10 branches from a repo:
{
repository(name: "git-point", owner: "gitpoint") {
refs(first: 10, refPrefix: "refs/heads/") {
nodes {
name
}
}
}
}
PS: You usually use a spread when dealing with a union type (like IssueTimeline, for example, which is composed of different kinds of objects), so you can spread on a particular object type to query specific fields.
You might need to use pagination to get all branches; see the sketch below.
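A sketch of cursor-based pagination (same repository as above; the after argument takes the endCursor from the previous page):
{
  repository(name: "git-point", owner: "gitpoint") {
    refs(first: 100, refPrefix: "refs/heads/") {
      pageInfo {
        endCursor
        hasNextPage
      }
      nodes {
        name
      }
    }
  }
}
Repeat the query with refs(first: 100, refPrefix: "refs/heads/", after: "<endCursor>") until hasNextPage is false.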
I am using Spring Data MongoDB to query MongoDB.
The document in MongoDB looks like this:
{
"data": {
"suggestions":[
{
"key": "take",
"value": 1
},
{
"key": "donttake",
"value": 0
}
]
}
}
In my API request I have a structure similar to the "suggestions" element above.
I want to create query criteria where the "is" clause should be the value of the "suggestions" element in the API request.
I tried the following code using spring data mongo db:
JsonParser jsonParser = new JsonParser();
ObjectMapper objMapper = new ObjectMapper();
String jsonArrayString = objMapper.writeValueAsString(apirequest.getSuggestions());
JsonArray arrayFromString = jsonParser.parse(jsonArrayString).getAsJsonArray();
criteria = Criteria.where("data.suggestions").is(arrayFromString);
The problem with this code is that when I debug and inspect the query that gets created using the criteria above, it goes in as $java: [{"key": "take", "value": 1}]
Therefore, it can't match it with the mongo document and doesn't fetch me any result.
Is there another way to query an array of documents in MongoDB from Spring Data MongoDB?
I followed a completely different approach by reading some information on querying arrays in mongodb available at
https://docs.mongodb.com/manual/tutorial/query-array-of-documents/
I used elemMatch to solve this problem as follows:
Let's say my API request gets mapped to an object suggestions, and keyVal is an object that stores one key/value pair.
for (KeyVal keyVal : suggestions) {
    Criteria c = Criteria.where("key").is(keyVal.key())
                         .and("value").is(keyVal.value());
    // Note: this reassigns criteria on every iteration, so only the last
    // suggestion is kept; with several suggestions, collect the elemMatch
    // criteria in a list and combine them with new Criteria().andOperator(...)
    criteria = Criteria.where("data.suggestions").elemMatch(c);
}
Then criteria can be used in a Mongo Query.
Also, keep in mind that elemMatch doesn't care about the ordering of elements inside a document in an array. So that way, elemMatch solves the purpose well.
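For reference, an elemMatch criterion like the one above translates to roughly this MongoDB query (a sketch, with values taken from the example document):
db.collection.find({
  "data.suggestions": { "$elemMatch": { "key": "take", "value": 1 } }
})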
I have an AWS API that proxies Lambda functions. I currently use different endpoints with separate Lambda functions:
api.com/getData --> getData
api.com/addData --> addData
api.com/signUp --> signUp
The process to manage all the endpoints and functions becomes cumbersome. Is there any disadvantage when I use a single endpoint to one lambda function which decides what to do based on the query string?
api.com/exec?func=getData --> exec --> if(params.func === 'getData') { ... }
It's perfectly valid to map multiple methods to a single lambda function and many people are using this methodology today as opposed to creating an api gateway resource and lambda function for each discrete method.
You might consider proxying all requests to a single function. Take a look at the following documentation on creating an API Gateway => Lambda proxy integration:
http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-set-up-simple-proxy.html
Their example is great here. A request like the following:
POST /testStage/hello/world?name=me HTTP/1.1
Host: gy415nuibc.execute-api.us-east-1.amazonaws.com
Content-Type: application/json
headerName: headerValue
{
"a": 1
}
Will wind up sending the following event data to your AWS Lambda function:
{
"message": "Hello me!",
"input": {
"resource": "/{proxy+}",
"path": "/hello/world",
"httpMethod": "POST",
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip, deflate",
"cache-control": "no-cache",
"CloudFront-Forwarded-Proto": "https",
"CloudFront-Is-Desktop-Viewer": "true",
"CloudFront-Is-Mobile-Viewer": "false",
"CloudFront-Is-SmartTV-Viewer": "false",
"CloudFront-Is-Tablet-Viewer": "false",
"CloudFront-Viewer-Country": "US",
"Content-Type": "application/json",
"headerName": "headerValue",
"Host": "gy415nuibc.execute-api.us-east-1.amazonaws.com",
"Postman-Token": "9f583ef0-ed83-4a38-aef3-eb9ce3f7a57f",
"User-Agent": "PostmanRuntime/2.4.5",
"Via": "1.1 d98420743a69852491bbdea73f7680bd.cloudfront.net (CloudFront)",
"X-Amz-Cf-Id": "pn-PWIJc6thYnZm5P0NMgOUglL1DYtl0gdeJky8tqsg8iS_sgsKD1A==",
"X-Forwarded-For": "54.240.196.186, 54.182.214.83",
"X-Forwarded-Port": "443",
"X-Forwarded-Proto": "https"
},
"queryStringParameters": {
"name": "me"
},
"pathParameters": {
"proxy": "hello/world"
},
"stageVariables": {
"stageVariableName": "stageVariableValue"
},
"requestContext": {
"accountId": "12345678912",
"resourceId": "roq9wj",
"stage": "testStage",
"requestId": "deef4878-7910-11e6-8f14-25afc3e9ae33",
"identity": {
"cognitoIdentityPoolId": null,
"accountId": null,
"cognitoIdentityId": null,
"caller": null,
"apiKey": null,
"sourceIp": "192.168.196.186",
"cognitoAuthenticationType": null,
"cognitoAuthenticationProvider": null,
"userArn": null,
"userAgent": "PostmanRuntime/2.4.5",
"user": null
},
"resourcePath": "/{proxy+}",
"httpMethod": "POST",
"apiId": "gy415nuibc"
},
"body": "{\r\n\t\"a\": 1\r\n}",
"isBase64Encoded": false
}
}
Now you have access to all headers, url params, body etc. and you could use that to handle requests differently in a single Lambda function (basically implementing your own routing).
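For example, a minimal routing sketch built on the aws-lambda-java-events types (the routes themselves are hypothetical):
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

public class Router implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent request, Context context) {
        // The proxy integration passes the HTTP method and path straight through
        String route = request.getHttpMethod() + " " + request.getPath();
        switch (route) {
            case "GET /getData":
                return respond(200, "{\"data\": []}");
            case "POST /addData":
                return respond(201, "{\"status\": \"added\"}");
            default:
                return respond(404, "{\"error\": \"no such route\"}");
        }
    }

    private APIGatewayProxyResponseEvent respond(int statusCode, String body) {
        return new APIGatewayProxyResponseEvent().withStatusCode(statusCode).withBody(body);
    }
}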
In my opinion there are some advantages and disadvantages to this approach. Many of them depend on your specific use case:
Deployment: if each lambda function is discrete then you can deploy them independently, which might reduce the risk from code changes (microservices strategy). Conversely you may find that needing to deploy functions separately adds complexity and is burdensome.
Self Description: API Gateway's interface makes it extremely intuitive to see the layout of your RESTful endpoints -- the nouns and verbs are all visible at a glance. Implementing your own routing could come at the expense of this visibility.
Lambda sizing and limits: If you proxy everything to a single function, you'll wind up needing to choose an instance size, timeout, etc. that accommodates all of your RESTful endpoints. If you create discrete functions, you can more carefully choose the memory footprint, timeout, dead-letter behavior, etc. that best meets the needs of the specific invocation.
I would have commented to just add a couple of points to Dave Maple's great answer but I don't have enough reputation points yet so I'll add the comments here.
I started to head down the path of multiple endpoints pointing to one Lambda function that could treat each endpoint differently by accessing the 'resource' property of the event. After trying it, I have now separated them into separate functions for the reasons that Dave suggested, plus:
I find it easier to go through logs and monitors when the functions are separated.
One nuance that as a beginner I didn't pick up on at first is that you can have one code base and deploy the exact same code as multiple Lambda functions. This allows you to have the benefits of function separation and the benefits of a consolidated approach in your code base.
You can use the AWS CLI to automate tasks across the multiple functions to reduce/eliminate the downside of managing separate functions. For example, I have a script that updates 10 functions with the same code.
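For example, a sketch of such a script (the function names are hypothetical):
#!/bin/bash
# Deploy the same build artifact to several Lambda functions
for fn in getData addData signUp; do
  aws lambda update-function-code --function-name "$fn" --zip-file fileb://build/app.zip
done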
I've been building five or six microservices with Lambda and API Gateway, and been through several tries, failures, and successes.
In short, from my experience, it's better to delegate all the API calls to Lambda with just one API Gateway wildcard mapping, such as:
/api/{proxy+} -> Lambda
If you have ever used a framework like Grape, you know that when making APIs, features like
"middleware"
"global exception handling"
"cascade routing"
"parameter validation"
are really crucial.
As your API grows, it's almost impossible to manage all the routes with API Gateway mappings, and API Gateway supports none of those features either.
Furthermore, it's not really practical to break the Lambda apart per endpoint for development or deployment.
From your example:
api.com/getData --> getData
api.com/addData --> addData
api.com/signUp --> signUp
Imagine you have a data ORM, user authentication logic, and a common view file (such as data.erb). How are you going to share those?
You might be able to break it up like:
api/auth/{proxy+} -> AuthServiceLambda
api/data/{proxy+} -> DataServiceLambda
but not per endpoint. You might look up the concept of microservices and best practices for how to split a service.
For those web-framework-like features, check out the web framework for Lambda that we just built, since I needed this at my company.
A similar scenario is addressed in the official AWS blog post named Best practices for organizing larger serverless applications.
The general recommendation is to split "monolithic lambdas" into separate lambdas and move the routing to the API Gateway.
This is what the blog writes about the "monolithic lambda" approach:
This approach is generally unnecessary, and it’s often better to take advantage of the native routing functionality available in API Gateway.
...
API Gateway is also capable of validating parameters, reducing the need for checking parameters with custom code. It can also provide protection against unauthorized access, and a range of other features more suited to be handled at the service level.
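For instance, a sketch of what Gateway-side validation can look like in an OpenAPI definition using the API Gateway extensions (the validator name "all" is arbitrary):
x-amazon-apigateway-request-validators:
  all:
    validateRequestBody: true
    validateRequestParameters: true
x-amazon-apigateway-request-validator: all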
(The blog post illustrates this with diagrams: going from a single monolithic Lambda behind API Gateway to one Lambda function per route.)
The responsibility of mapping API requests to Lambda in AWS is handled through a Gateway's API specification.
Mapping of URL paths and HTTP methods as well as data validation SHOULD be left up to the Gateway. There is also the question of permissions and API scope; you'll not be able to leverage API scopes and IAM permission levels in a normal way.
In terms of coding, to replicate this mechanism inside of a Lambda handler is an anti-pattern. Going down that route one will soon end up with something that looks like the routing for a node express server, not a Lambda function.
After having set up 50+ Lambdas behind API Gateway I can say that
function handlers should be kept as dumb as possible, allowing them to be reusable independently of the context from which they're being invoked.
As far as I know, AWS allows only one handler per Lambda function. That’s why I have created a little "routing" mechanism with Java Generics (for stronger type checks at compile time). In the following example you can call multiple methods and pass different object types to the Lambda and back via one Lambda handler:
Lambda class with handler:
public class GenericLambda implements RequestHandler<LambdaRequest<?>, LambdaResponse<?>> {
@Override
public LambdaResponse<?> handleRequest(LambdaRequest<?> lambdaRequest, Context context) {
switch (lambdaRequest.getMethod()) {
case WARMUP:
context.getLogger().log("Warmup");
LambdaResponse<String> lambdaResponseWarmup = new LambdaResponse<String>();
lambdaResponseWarmup.setResponseStatus(LambdaResponse.ResponseStatus.IN_PROGRESS);
return lambdaResponseWarmup;
case CREATE:
User user = (User)lambdaRequest.getData();
context.getLogger().log("insert user with name: " + user.getName()); //insert user in db
LambdaResponse<String> lambdaResponseCreate = new LambdaResponse<String>();
lambdaResponseCreate.setResponseStatus(LambdaResponse.ResponseStatus.COMPLETE);
return lambdaResponseCreate;
case READ:
context.getLogger().log("read user with id: " + (Integer)lambdaRequest.getData());
user = new User(); //create user object for test, instead of read from db
user.setName("name");
LambdaResponse<User> lambdaResponseRead = new LambdaResponse<User>();
lambdaResponseRead.setData(user);
lambdaResponseRead.setResponseStatus(LambdaResponse.ResponseStatus.COMPLETE);
return lambdaResponseRead;
default:
LambdaResponse<String> lambdaResponseIgnore = new LambdaResponse<String>();
lambdaResponseIgnore.setResponseStatus(LambdaResponse.ResponseStatus.IGNORED);
return lambdaResponseIgnore;
}
}
}
LambdaRequest class:
public class LambdaRequest<T> {
private Method method;
private T data;
private int languageID;
public static enum Method {
WARMUP, CREATE, READ, UPDATE, DELETE
}
public LambdaRequest(){
}
public Method getMethod() {
return method;
}
public void setMethod(Method create) {
this.method = create;
}
public T getData() {
return data;
}
public void setData(T data) {
this.data = data;
}
public int getLanguageID() {
return languageID;
}
public void setLanguageID(int languageID) {
this.languageID = languageID;
}
}
LambdaResponse class:
public class LambdaResponse<T> {
private ResponseStatus responseStatus;
private T data;
private String errorMessage;
public LambdaResponse(){
}
public static enum ResponseStatus {
IGNORED, IN_PROGRESS, COMPLETE, ERROR, COMPLETE_DUPLICATE
}
public ResponseStatus getResponseStatus() {
return responseStatus;
}
public void setResponseStatus(ResponseStatus responseStatus) {
this.responseStatus = responseStatus;
}
public T getData() {
return data;
}
public void setData(T data) {
this.data = data;
}
public String getErrorMessage() {
return errorMessage;
}
public void setErrorMessage(String errorMessage) {
this.errorMessage = errorMessage;
}
}
Example POJO User class:
public class User {
private String name;
public User() {
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
JUnit test method:
@Test
public void GenericLambda() {
GenericLambda handler = new GenericLambda();
Context ctx = createContext();
//test WARMUP
LambdaRequest<String> lambdaRequestWarmup = new LambdaRequest<String>();
lambdaRequestWarmup.setMethod(LambdaRequest.Method.WARMUP);
LambdaResponse<String> lambdaResponseWarmup = (LambdaResponse<String>) handler.handleRequest(lambdaRequestWarmup, ctx);
//test READ user
LambdaRequest<Integer> lambdaRequestRead = new LambdaRequest<Integer>();
lambdaRequestRead.setData(1); //db id
lambdaRequestRead.setMethod(LambdaRequest.Method.READ);
LambdaResponse<User> lambdaResponseRead = (LambdaResponse<User>) handler.handleRequest(lambdaRequestRead, ctx);
}
PS: if you have deserialization problems (LinkedTreeMap cannot be cast to ...) in your Lambda function (because of the generics/Gson), use the following statement:
YourObject yourObject = (YourObject)convertLambdaRequestData2Object(lambdaRequest, YourObject.class);
Method:
private <T> Object convertLambdaRequestData2Object(LambdaRequest<?> lambdaRequest, Class<T> clazz) {
Gson gson = new Gson();
String json = gson.toJson(lambdaRequest.getData());
return gson.fromJson(json, clazz);
}
The way I see it, choosing a single vs. multiple APIs is a function of the following considerations:
Security: I think this is the biggest challenge of having a single API structure. It may be possible to have different security profiles for different parts of the requirement.
Think about the microservice model from a business perspective:
The whole purpose of any API should be serving some requests, hence it must be well understood and easy to use. So related APIs should be combined. For example, if you have a mobile client and it requires 10 things to be pulled in and out of the DB, it makes sense to have those 10 endpoints in a single API.
But this should be within reason and should be seen in the context of the overall solution design. For example, if you design a payroll product, you may want separate modules for leave management and user details management. Even if they are often used by a single client, they should still be different APIs, because their business meanings are different.
Reusability: Applies to both code and functionality reusability. Code reusability is an easier problem to solve, i.e. build common modules for shared requirements and package them as libraries.
Functionality reusability is harder to solve. In my mind, most of the cases can be solved by redesigning the way endpoints/functions are laid out, because if you need duplication of functionality that means your initial design is not detailed enough.
Just found a link in another SO post which summarizes this better.