Programmatically set active session in the Zed Attack Proxy API

We're working with the zaproxy API and we're trying to set a session to "active" with the setActiveSession() API call, which is documented here and takes two arguments: the site and the session. The problem we're running into is that we keep getting the error:
{
    "code": "illegal_parameter",
    "message": "Provided parameter has illegal or unrecognized value",
    "detail": "session"
}
Assuming we have the following sessions from the sessions() API-call,
{
    "sessions": [
        {
            "session": [
                "Session 1",
                {
                    "JSESSIONID": {
                        "comment": "",
                        "domain": "localhost",
                        "domainAttributeSpecified": false,
                        "expired": false,
                        "expiryDate": null,
                        "name": "JSESSIONID",
                        "path": "/",
                        "pathAttributeSpecified": false,
                        "persistent": false,
                        "secure": false,
                        "value": "941A60311B3C63C69C5887F531E7090A",
                        "version": 0
                    }
                },
                "16"
            ]
        }
    ]
}
What value do we need to send in the session field to make this API call successful? We've tried the value field in the "complex" JSESSIONID object, as well as the name "Session 1" and the "16" from the session array (under the assumption that it was an id of some sort). All of them return the same error.
[Edit] I just saw that ZAP is logging the following to the terminal when we make these calls:
1055328 [ZAP-ProxyThread-106] WARN org.zaproxy.zap.extension.api.API - ApiException while handling API request:
Provided parameter has illegal or unrecognized value (illegal_parameter) : session
    at org.zaproxy.zap.extension.httpsessions.HttpSessionsAPI.handleApiAction(Unknown Source)
    at org.zaproxy.zap.extension.api.API.handleApiRequest(Unknown Source)
    at org.parosproxy.paros.core.proxy.ProxyThread.processHttp(Unknown Source)
    at org.parosproxy.paros.core.proxy.ProxyThread.run(Unknown Source)
    at java.lang.Thread.run(Thread.java:748)

After a bit more trial and error in the API browser, we've discovered that the correct value is indeed Session 1, but we had been sending the name wrapped in quotation marks, i.e. "Session 1"; the correct way to send it is without them, i.e. Session 1.
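For anyone hitting the same wall, here is a minimal sketch of the working call as we now understand it, written as a plain HTTP request in TypeScript. The host and port, the site value, and the apikey are assumptions; adjust them to your own ZAP setup.

// Minimal sketch: set the active HTTP session through ZAP's JSON API.
// Assumptions: the API listens on localhost:8080 and enforces an API key.
const params = new URLSearchParams({
  site: "localhost",     // the site exactly as ZAP lists it for this session
  session: "Session 1",  // the plain name, no surrounding quotation marks
  apikey: "changeme",    // assumed; omit if your instance has the key disabled
});

const res = await fetch(
  `http://localhost:8080/JSON/httpSessions/action/setActiveSession/?${params}`
);
console.log(await res.json()); // {"Result":"OK"} on success

URLSearchParams takes care of encoding the space in Session 1, which is easy to get wrong when building the query string by hand.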

Related

How can I manage custom errors coming from the server?

I use JHipster with React as the frontend and LoopBack on the server side. I need to show custom errors (e.g. "tax code already present in the archive").
This is the error format:
{
    "error": {
        "statusCode": 422,
        "name": "UnprocessableEntityError",
        "message": "The request body is invalid. See error object `details` property for more info.",
        "code": "VALIDATION_FAILED",
        "details": [
            {
                "path": "partitaIva",
                "message": "Partita Iva già presente",
                "code": "CUSTOM_ERROR",
                "info": {}
            }
        ]
    }
}
There could be more errors too, e.g. for a form.
I want to know how to display the errors returned by the server.
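In case it helps, here is a minimal sketch of one way to surface those messages on the client, assuming the LoopBack body shown above arrives in your fetch/axios error handler. showFieldError and showGlobalError are hypothetical placeholders for however your React form displays messages, not JHipster or LoopBack APIs.

// Minimal sketch: map LoopBack's VALIDATION_FAILED details onto form fields.
// The interfaces mirror the error body shown above.
interface LoopbackErrorDetail {
  path: string;    // field name, e.g. "partitaIva"
  message: string; // server-provided text, e.g. "Partita Iva già presente"
  code: string;
}

interface LoopbackErrorBody {
  error?: {
    code?: string;
    message?: string;
    details?: LoopbackErrorDetail[];
  };
}

const showFieldError = (field: string, msg: string) =>
  console.warn(`${field}: ${msg}`); // placeholder: wire into your form state

const showGlobalError = (msg: string) =>
  console.error(msg); // placeholder: e.g. a toast or alert component

function handleServerError(body: LoopbackErrorBody): void {
  if (body.error?.code === "VALIDATION_FAILED" && body.error.details) {
    // One entry per invalid field: attach each message to its field.
    for (const d of body.error.details) showFieldError(d.path, d.message);
  } else {
    showGlobalError(body.error?.message ?? "Unexpected server error");
  }
}

You would call handleServerError with the parsed response body from your request library's catch handler.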

Validation error in AWS CloudWatch Events rule?

I am triggering my CodeBuild project using the CodeBuild triggers feature with the cron expression cron(*/2 * * * ? *), which should fire every 2 minutes. Unfortunately, it didn't run after 2 minutes. When I checked the CloudWatch metrics I could see that there were some FailedInvocations. To find the cause of the error I enabled CloudTrail logs, and I can see an error like this:
{
    "eventVersion": "1.04",
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "xx",
        "arn": "arn:aws:iam::xx:user/xx",
        "accountId": "xx",
        "accessKeyId": "xx",
        "userName": "xx",
        "sessionContext": {
            "attributes": {
                "mfaAuthenticated": "true",
                "creationDate": "2019-03-04T06:21:22Z"
            }
        },
        "invokedBy": "signin.amazonaws.com"
    },
    "eventTime": "2019-03-04T09:04:56Z",
    "eventSource": "monitoring.amazonaws.com",
    "eventName": "DescribeAlarms",
    "awsRegion": "ap-south-1",
    "sourceIPAddress": "xxx",
    "userAgent": "signin.amazonaws.com",
    "errorCode": "ValidationException",
    "errorMessage": "1 validation error detected: Value 'INVALID_FOR_SUMMARY' at 'stateValue' failed to satisfy constraint: Member must satisfy enum value set: [INSUFFICIENT_DATA, ALARM, OK]",
    "requestParameters": {
        "stateValue": "INVALID_FOR_SUMMARY"
    },
    "responseElements": null,
    "requestID": "94f3a789-3e5c-11e9-92f8-xxx",
    "eventID": "c9ecfca2-a650-4997-b707-xxx",
    "eventType": "AwsApiCall",
    "recipientAccountId": "xxx"
}
What does this mean exactly: 1 validation error detected: Value 'INVALID_FOR_SUMMARY' at 'stateValue' failed to satisfy constraint: Member must satisfy enum value set: [INSUFFICIENT_DATA, ALARM, OK]?
Is this error the reason my CodeBuild is not triggering?
Any help is appreciated.
Thanks.
Do it from the Amazon command line; it looks like a known issue in AWS. I managed to update mine via the regular CI job of our orchestration. (Note that the CloudTrail entry you posted is a DescribeAlarms call made by the console itself, per the signin.amazonaws.com userAgent, so it isn't necessarily the direct cause of the FailedInvocations.)
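For reference, the CLI workaround amounts to re-putting the rule through the PutRule API rather than the console. Here is a sketch with the AWS SDK for JavaScript (aws-sdk v2); the rule name and region are assumptions.

// Minimal sketch: re-put the schedule rule via the CloudWatch Events API,
// bypassing the console. Rule name and region are assumptions.
import { CloudWatchEvents } from "aws-sdk";

const events = new CloudWatchEvents({ region: "ap-south-1" });

events
  .putRule({
    Name: "my-codebuild-trigger",              // assumed rule name
    ScheduleExpression: "cron(*/2 * * * ? *)", // every 2 minutes, as in the question
    State: "ENABLED",
  })
  .promise()
  .then((r) => console.log("Rule ARN:", r.RuleArn))
  .catch(console.error);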

Creating an event through the ST Developer Portal's API Console

I'm trying to create an event using the API console and keep getting errors. Any ideas why?
I've been using different versions of the example value:
{
    "name": "string",
    "description": "string",
    "status": "string",
    "event_id": "string",
    "start_epoch": 0,
    "end_epoch": 0,
    "industry": "string",
    "archived": true,
    "deleted": true,
    "legacy_id": 0,
    "is_public": true
}
I get the following back. Any thoughts?
{
    "code": "BadRequestError",
    "message": "[\"Has time can't be blank\",\"true is not included in the list\"]"
}
You will need to fetch the user/team information first.
Once you have your OAuth token from above and have set it in the Authorization header, make a call to https://developer-portal.socialtables.com/api-console#!/Authentication/get_4_0_oauth_token
This will give you back the user and team objects needed for the subsequent calls that create events.
Once you have the team_id you can create events.
You can POST to /4.0/events.
Swagger doc: https://developer-portal.socialtables.com/api-console#!/Events/post_4_0_events
Example POST payload:
{
    "name": "NAME",
    "description": "DESCRIPTION",
    "status": "new",
    "start_epoch": TIME_IN_MS,
    "end_epoch": TIME_IN_MS,
    "industry": "INDUSTRY_TYPE",
    "has_time": 1 // 0 = all-day event, 1 = from/to a specific time in day
}
- This will return the event ID under data.event.id in the response from the above POST (a full request sketch follows below)
- You can then link the user to:
https://home.socialtables.com/events/EVENT_ID
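Putting the steps together, here is a sketch of the final request. The base host and the Bearer auth scheme are assumptions on my part; verify both against the Swagger doc linked above.

// Minimal sketch: create an event once you have the OAuth token and team context.
// Base URL and auth scheme are assumptions; payload values are placeholders.
const token = "YOUR_OAUTH_TOKEN";

const res = await fetch("https://api.socialtables.com/4.0/events", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    name: "NAME",
    description: "DESCRIPTION",
    status: "new",
    start_epoch: 1551686400000, // time in ms
    end_epoch: 1551690000000,   // time in ms
    industry: "INDUSTRY_TYPE",
    has_time: 1,                // 0 = all-day event, 1 = specific from/to times
  }),
});

console.log(await res.json()); // the event ID comes back under data.event.id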

Error loading file stored in Google Cloud Storage to Big Query

I have been trying to create a job to load a compressed JSON file from Google Cloud Storage into a Google BigQuery table. I have read/write access to both Google Cloud Storage and Google BigQuery, and the uploaded file belongs to the same project as the BigQuery table.
The problem happens when I send a POST request to the resource behind this URL: https://www.googleapis.com/upload/bigquery/v2/projects/NUMERIC_ID/jobs. The content of the request is as follows:
{
    "kind": "bigquery#job",
    "projectId": NUMERIC_ID,
    "configuration": {
        "load": {
            "sourceUris": ["gs://bucket_name/document.json.gz"],
            "schema": {
                "fields": [
                    { "name": "id", "type": "INTEGER" },
                    { "name": "date", "type": "TIMESTAMP" },
                    { "name": "user_agent", "type": "STRING" },
                    { "name": "queried_key", "type": "STRING" },
                    { "name": "user_country", "type": "STRING" },
                    { "name": "duration", "type": "INTEGER" },
                    { "name": "target", "type": "STRING" }
                ]
            },
            "destinationTable": {
                "datasetId": "DATASET_NAME",
                "projectId": NUMERIC_ID,
                "tableId": "TABLE_ID"
            }
        }
    }
}
However, I get the following error, which doesn't make any sense to me:
{
    "error": {
        "errors": [
            {
                "domain": "global",
                "reason": "invalid",
                "message": "Job configuration must contain exactly one job-specific configuration object (e.g., query, load, extract, spreadsheetExtract), but there were 0: "
            }
        ],
        "code": 400,
        "message": "Job configuration must contain exactly one job-specific configuration object (e.g., query, load, extract, spreadsheetExtract), but there were 0: "
    }
}
I know the problem lies neither in the project ID nor in the access token placed in the authentication header, because I have successfully created an empty table before. Also, I set the Content-Type header to application/json, which I don't think is the issue here, because the body content is JSON encoded.
Thanks in advance.
Your HTTP request is malformed: BigQuery doesn't recognize this as a load job at all.
You need to look into the POST request and check the body you send.
You need to ensure that all of the above (which seems correct) is the body of the POST call. The JSON should be on a single line, and if you are manually creating the multipart message, make sure there is an extra newline between the headers and body of each MIME part.
If you are using some sort of library, make sure the body is not expected in some other form, like resource, content, or body. I've seen libraries that use these differently.
Try out the BigQuery API explorer: https://developers.google.com/bigquery/docs/reference/v2/jobs/insert and ensure your request body matches the one the explorer generates.
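One more thing worth checking: since your data already lives in Cloud Storage, you don't need the /upload/ endpoint at all; that path is for media uploads and expects a multipart body, which would explain BigQuery seeing zero job configurations in what you sent. The plain jobs endpoint accepts the configuration as an ordinary JSON body. Here is a sketch in TypeScript; PROJECT_ID, DATASET_NAME, TABLE_ID and ACCESS_TOKEN are placeholders, and the schema is trimmed for brevity.

// Minimal sketch: insert the load job via the plain (non-upload) jobs endpoint,
// since the source data is already in GCS.
const job = {
  configuration: {
    load: {
      sourceUris: ["gs://bucket_name/document.json.gz"],
      sourceFormat: "NEWLINE_DELIMITED_JSON", // worth stating explicitly for JSON input
      schema: {
        fields: [
          { name: "id", type: "INTEGER" },
          // ...remaining fields from the schema in the question...
        ],
      },
      destinationTable: {
        projectId: "PROJECT_ID", // note: a JSON string, not a bare number
        datasetId: "DATASET_NAME",
        tableId: "TABLE_ID",
      },
    },
  },
};

const res = await fetch(
  "https://www.googleapis.com/bigquery/v2/projects/PROJECT_ID/jobs",
  {
    method: "POST",
    headers: {
      Authorization: "Bearer ACCESS_TOKEN",
      "Content-Type": "application/json",
    },
    body: JSON.stringify(job),
  }
);
console.log(await res.json());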

AWS X-Ray put-trace-segments command returns an error

I am trying to send a segment document manually using the CLI, following the example on this page: https://docs.aws.amazon.com/xray/latest/devguide/xray-api-sendingdata.html#xray-api-segments
I created my own trace ID and also the start and end times.
The commands I used are:
> DOC='{"trace_id": "'$TRACE_ID'", "id": "6226467e3f841234", "start_time": 1581596193, "end_time": 1581596198, "name": "test.com"}'
> echo $DOC
{"trace_id": "1-5e453c54-3dc3e03a3c86f97231d06c88", "id": "6226467e3f845502", "start_time": 1581596193, "end_time": 1581596198, "name": "test.com"}
> aws xray put-trace-segments --trace-segment-documents $DOC
{
    "UnprocessedTraceSegments": [
        {
            "ErrorCode": "ParseError",
            "Message": "Invalid segment. ErrorCode: ParseError"
        },
        {
            "ErrorCode": "MissingId",
            "Message": "Invalid segment. ErrorCode: MissingId"
        },
        {
            "ErrorCode": "MissingId",
            "Message": "Invalid segment. ErrorCode: MissingId"
        },
        .................
The put-trace-segments call keeps giving me errors. The segment doc complies with the JSON schema too. Am I missing something else?
Thanks.
I needed to enclose the JSON in double quotes. The command that works for me was: aws xray put-trace-segments --trace-segment-documents "$DOC"
Without the quotes, the shell word-splits the expanded $DOC into several arguments, and the CLI treats each fragment as a separate (and invalid) segment document, which is why you see one ParseError/MissingId entry per fragment.
This is probably due to an error in the documentation, or the X-Ray team was using another kind of shell.
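If you'd rather sidestep shell quoting entirely, the same call can be made programmatically. Here is a sketch with the AWS SDK for JavaScript (aws-sdk v2), reusing the sample values from the question; the region is an assumption.

// Minimal sketch: submit the segment through the SDK instead of the shell,
// avoiding word-splitting issues entirely.
import { XRay } from "aws-sdk";

const xray = new XRay({ region: "us-east-1" }); // assumed region

const segment = {
  trace_id: "1-5e453c54-3dc3e03a3c86f97231d06c88",
  id: "6226467e3f845502",
  start_time: 1581596193,
  end_time: 1581596198,
  name: "test.com",
};

xray
  .putTraceSegments({ TraceSegmentDocuments: [JSON.stringify(segment)] })
  .promise()
  .then((r) => console.log(r.UnprocessedTraceSegments)) // [] on success
  .catch(console.error);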