ZAP: Exploring APIs - how to set header parameters in the UI?

I'm evaluating ZAP and followed the guide: https://zaproxy.blogspot.ru/2017/04/exploring-apis-with-zap.html
But I cannot find a way to set header parameters in the UI, for example an API key.

You could use the Replacer addon: https://www.zaproxy.org/docs/desktop/addons/replacer/
Or a script. Here's a Python scripting example (an HTTP Sender script, so the headers get added to every request ZAP sends):
# Headers to add to every outgoing request
headers = {"X-MIP-ACCESS-TOKEN": "XXXXXxXX-xxXX-XXXx-xxxX-XXxxXxXXxXxX",
           "X-MIP-CHANNEL": "ANDROID",
           "X-MIP-Device-Id": "1",
           "X-MIP-APP-VERSION": "1.0.1",
           "X-MIP-APP-VERSION-ID": "1"}

def sendingRequest(msg, initiator, helper):
    # Called for every request ZAP is about to send: set each header on it
    for name in headers:
        msg.getRequestHeader().setHeader(name, headers[name])

def responseReceived(msg, initiator, helper):
    pass
You can find other examples in the community-scripts repo: https://github.com/zaproxy/community-scripts
You can get the Replacer add-on or the Python scripting add-on via the ZAP Marketplace.
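If you'd rather add the Replacer rule programmatically (e.g. from a CI run) instead of through the desktop UI, something along these lines may work using the ZAP Python API. This is only a sketch: it assumes the Replacer add-on is installed, and the exact parameter names of replacer.add_rule may differ between client versions.
from zapv2 import ZAPv2

# Connect to a locally running ZAP instance (adjust the API key and proxy address)
zap = ZAPv2(apikey='your-zap-api-key',
            proxies={'http': 'http://127.0.0.1:8080', 'https': 'http://127.0.0.1:8080'})

# Add a Replacer rule that sets the access-token header on every request
zap.replacer.add_rule(description='access token header',
                      enabled='true',
                      matchtype='REQ_HEADER',
                      matchregex='false',
                      matchstring='X-MIP-ACCESS-TOKEN',
                      replacement='XXXXXxXX-xxXX-XXXx-xxxX-XXxxXxXXxXxX')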


Is there a way to automate this Python script in GCP?

I am a complete beginner in using GCP functions/products.
I have written the code below, which takes a list of cities from a local folder and calls in weather data for each city in that list, eventually uploading those weather values into a table in BigQuery. I don't need to change the code anymore, as it creates new tables when a new week begins. Now I would like to "deploy" it (I'm not even sure deploying is the right word) so that it runs automatically in the cloud. I tried using App Engine and Cloud Functions but faced issues in both places.
import requests, json, sqlite3, os, csv, datetime, re
from google.cloud import bigquery
#from google.cloud import storage

list_city = []
with open("list_of_cities.txt", "r") as pointer:
    for line in pointer:
        list_city.append(line.strip())

API_key = "PLACEHOLDER"
Base_URL = "http://api.weatherapi.com/v1/history.json?key="
yday = datetime.date.today() - datetime.timedelta(days = 1)
Date = yday.strftime("%Y-%m-%d")
table_id = f"sonic-cat-315013.weather_data.Historical_Weather_{yday.isocalendar()[0]}_{yday.isocalendar()[1]}"

credentials_path = r"PATH_TO_JSON_FILE"
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = credentials_path
client = bigquery.Client()

try:
    schema = [
        bigquery.SchemaField("city", "STRING", mode="REQUIRED"),
        bigquery.SchemaField("Date", "Date", mode="REQUIRED"),
        bigquery.SchemaField("Hour", "INTEGER", mode="REQUIRED"),
        bigquery.SchemaField("Temperature", "FLOAT", mode="REQUIRED"),
        bigquery.SchemaField("Humidity", "FLOAT", mode="REQUIRED"),
        bigquery.SchemaField("Condition", "STRING", mode="REQUIRED"),
        bigquery.SchemaField("Chance_of_rain", "FLOAT", mode="REQUIRED"),
        bigquery.SchemaField("Precipitation_mm", "FLOAT", mode="REQUIRED"),
        bigquery.SchemaField("Cloud_coverage", "INTEGER", mode="REQUIRED"),
        bigquery.SchemaField("Visibility_km", "FLOAT", mode="REQUIRED")
    ]
    table = bigquery.Table(table_id, schema=schema)
    table.time_partitioning = bigquery.TimePartitioning(
        type_=bigquery.TimePartitioningType.DAY,
        field="Date",  # name of column to use for partitioning
    )
    table = client.create_table(table)  # Make an API request.
    print(
        "Created table {}.{}.{}".format(table.project, table.dataset_id, table.table_id)
    )
except:
    print("Table {}_{} already exists".format(yday.isocalendar()[0], yday.isocalendar()[1]))

def get_weather():
    try:
        x["location"]
    except:
        print(f"API could not call city {city_name}")

    global day, time, dailytemp, dailyhum, dailycond, chance_rain, Precipitation, Cloud_coverage, Visibility_km
    day = []
    time = []
    dailytemp = []
    dailyhum = []
    dailycond = []
    chance_rain = []
    Precipitation = []
    Cloud_coverage = []
    Visibility_km = []

    for i in range(24):
        dayval = re.search("^\S*\s", x["forecast"]["forecastday"][0]["hour"][i]["time"])
        timeval = re.search("\s(.*)", x["forecast"]["forecastday"][0]["hour"][i]["time"])
        day.append(dayval.group()[:-1])
        time.append(timeval.group()[1:])
        dailytemp.append(x["forecast"]["forecastday"][0]["hour"][i]["temp_c"])
        dailyhum.append(x["forecast"]["forecastday"][0]["hour"][i]["humidity"])
        dailycond.append(x["forecast"]["forecastday"][0]["hour"][i]["condition"]["text"])
        chance_rain.append(x["forecast"]["forecastday"][0]["hour"][i]["chance_of_rain"])
        Precipitation.append(x["forecast"]["forecastday"][0]["hour"][i]["precip_mm"])
        Cloud_coverage.append(x["forecast"]["forecastday"][0]["hour"][i]["cloud"])
        Visibility_km.append(x["forecast"]["forecastday"][0]["hour"][i]["vis_km"])

    for i in range(len(time)):
        time[i] = int(time[i][:2])

def main():
    i = 0
    while i < len(list_city):
        try:
            global city_name
            city_name = list_city[i]
            complete_URL = Base_URL + API_key + "&q=" + city_name + "&dt=" + Date
            response = requests.get(complete_URL, timeout = 10)
            global x
            x = response.json()
            get_weather()
            table = client.get_table(table_id)
            varlist = []
            for j in range(24):
                variables = city_name, day[j], time[j], dailytemp[j], dailyhum[j], dailycond[j], chance_rain[j], Precipitation[j], Cloud_coverage[j], Visibility_km[j]
                varlist.append(variables)
            client.insert_rows(table, varlist)
            print(f"City {city_name}, ({i+1} out of {len(list_city)}) successfully inserted")
            i += 1
        except Exception as e:
            print(e)
            continue
In the code, there are direct references to two files located locally: one is the list of cities and the other is the JSON file containing the credentials to access my project in GCP. I thought that uploading these files to Cloud Storage and referencing them from there wouldn't be an issue, but then I realised that I can't actually access my buckets in Cloud Storage without using the credentials file.
This leaves me unsure whether the entire process is possible at all: how do I authenticate from the cloud in the first place, if I need to reference the credentials file locally first? It seems like an endless circle, where I'd authenticate using the file in Cloud Storage, but I'd need authentication first to access that file.
I'd really appreciate some help here. I have no idea where to go from this, and I also don't have a great background in SE/CS; I only know Python, R and SQL.
For Cloud Functions, the deployed function will run with the project service account credentials by default, without needing a separate credentials file. Just make sure this service account is granted access to whatever resources it will be trying to access.
You can read more info about this approach here (along with options for using a different service account if you desire): https://cloud.google.com/functions/docs/securing/function-identity
This approach is very easy, and keeps you from having to deal with a credentials file at all on the server. Note that you should remove the os.environ line, as it's unneeded. The BigQuery client will use the default credentials as noted above.
If you want the code to run the same whether on your local machine or deployed to the cloud, simply set a "GOOGLE_APPLICATION_CREDENTIALS" environment variable permanently in the OS on your machine. This is similar to what you're doing in the code you posted; however, you're temporarily setting it every time using os.environ rather than permanently setting the environment variable on your machine. The os.environ call only sets that environment variable for that one process execution.
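As a minimal sketch of that point: in both cases the client is constructed without any explicit credentials and relies on Application Default Credentials.
from google.cloud import bigquery

# No explicit credentials: the client falls back to Application Default
# Credentials - the function's service account when running in Cloud Functions,
# or the key file pointed to by GOOGLE_APPLICATION_CREDENTIALS when running locally.
client = bigquery.Client()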
If for some reason you don't want to use the default service account approach outlined above, you can instead reference a credentials file directly when you instantiate the bigquery.Client():
https://cloud.google.com/bigquery/docs/authentication/service-account-file
You just need to package the credentials file with your code (i.e. in the same folder as your main.py file) and deploy it alongside, so it's in the execution environment. In that case, it is referenceable/loadable from your script without needing any special permissions or credentials; just provide the relative path to the file (assuming it's in the same directory as your Python script, reference only the filename).
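For example, a minimal sketch (the key file name is just a placeholder for whatever file you deploy next to main.py):
from google.cloud import bigquery

# Load credentials directly from a key file deployed alongside main.py.
# "service_account.json" is a placeholder filename.
client = bigquery.Client.from_service_account_json("service_account.json")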
There are different flavors and options for deploying your application, and which one fits will depend on your application's semantics and execution constraints.
It would be too much to cover all of them here, and the official Google Cloud Platform documentation covers each of them in great detail:
Google Compute Engine
Google Kubernetes Engine
Google App Engine
Google Cloud Functions
Google Cloud Run
Based on my understanding of your application design, the most suitable ones would be:
Google App Engine
Google Cloud Functions
Google Cloud Run: check these criteria to see if your application is a good fit for this deployment style
I would suggest using Cloud Functions as your deployment option, in which case your application will default to using the project's App Engine service account (PROJECT_ID@appspot.gserviceaccount.com) to authenticate itself and perform allowed actions. Hence, you should only check, under the IAM configuration section, that this default account has proper access to the needed APIs (BigQuery in your case).
In such a setup, you won't need to push your service account key to Cloud Storage (which I would recommend avoiding in either case), and you won't need to pull it either, as the runtime will handle authenticating the function for you.
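As a rough sketch of what the Cloud Functions wrapper could look like (the function name, the runtime, and the assumption that list_of_cities.txt is deployed with the source are mine, not part of the original code):
# main.py - a minimal, hypothetical HTTP-triggered Cloud Function
from google.cloud import bigquery

client = bigquery.Client()  # uses the function's service account automatically

def update_weather(request):
    """HTTP entry point; `request` is the incoming Flask request object."""
    with open("list_of_cities.txt") as f:  # file deployed alongside main.py
        cities = [line.strip() for line in f]
    # ... fetch the weather data and insert rows with client.insert_rows(...),
    # reusing the logic from the script in the question ...
    return "ok", 200
Deployed, for example, with gcloud functions deploy update_weather --runtime python39 --trigger-http, and then scheduled (e.g. via Cloud Scheduler hitting the HTTP endpoint) if it should run daily.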

How to get SSML <mark> timestamps from Google Cloud text-to-speech API

I want to use SSML markers through the Google Cloud text-to-speech API to request the timing of these markers in the audio stream. These timestamps are necessary in order to provide cues for effects, word/section highlighting and feedback to the user.
I found this question which is relevant, although the question refers to the timestamps for each word and not the SSML <mark> tag.
The following API request returns OK but the response lacks the requested marker data. This is using the Cloud Text-to-Speech API v1.
{
  "voice": {
    "languageCode": "en-US"
  },
  "input": {
    "ssml": "<speak>First, <mark name=\"a\"/> second, <mark name=\"b\"/> third.</speak>"
  },
  "audioConfig": {
    "audioEncoding": "mp3"
  }
}
Response:
{
  "audioContent": "//NExAAAAANIAAAAABcFAThYGJqMWA..."
}
Which only provides the synthesized audio without any contextual information.
Is there an API request that I am overlooking which can expose information about these markers such as is the case with IBM Watson and Amazon Polly?
Looks like this is supported in Cloud Text-to-Speech API v1beta1: https://cloud.google.com/text-to-speech/docs/reference/rest/v1beta1/text/synthesize#TimepointType
You can use https://texttospeech.googleapis.com/v1beta1/text:synthesize. Set TimepointType to SSML_MARK. If this field is not set, timepoints are not returned by default.
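For example (a sketch only; the enableTimePointing field name is taken from the v1beta1 reference above), the request from the question becomes:
{
  "voice": {
    "languageCode": "en-US"
  },
  "input": {
    "ssml": "<speak>First, <mark name=\"a\"/> second, <mark name=\"b\"/> third.</speak>"
  },
  "audioConfig": {
    "audioEncoding": "mp3"
  },
  "enableTimePointing": ["SSML_MARK"]
}
The response should then contain a timepoints array (markName / timeSeconds pairs) alongside audioContent.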
At the time of writing, the timepoint data is available in the v1beta1 release of Google cloud text-to-speech.
I didn't need to sign on to any extra developer program in order to access the beta, beyond the default access.
Importing in Python (for example) went from:
from google.cloud import texttospeech as tts
to:
from google.cloud import texttospeech_v1beta1 as tts
Nice and simple.
I needed to modify the default way I was sending the synthesis request to include the enable_time_pointing flag.
I found that with a mix of poking around the machine-readable API description here and reading the Python library code, which I had already downloaded.
Thankfully, the source in the generally available version also includes the v1beta version - thank you Google!
I've put a runnable sample below. Running this needs the same auth and setup you'll need for a general text-to-speech sample, which you can get by following the official documentation.
Here's what it does for me (with slight formatting for readability):
$ python tools/try-marks.py
Marks content written to file: .../demo.json
Audio content written to file: .../demo.mp3
$ cat demo.json
[
  {"sec": 0.4300000071525574, "name": "here"},
  {"sec": 0.9234582781791687, "name": "there"}
]
Here's the sample:
import json
from pathlib import Path

from google.cloud import texttospeech_v1beta1 as tts


def go_ssml(basename: Path, ssml):
    client = tts.TextToSpeechClient()
    voice = tts.VoiceSelectionParams(
        language_code="en-AU",
        name="en-AU-Wavenet-B",
        ssml_gender=tts.SsmlVoiceGender.MALE,
    )
    response = client.synthesize_speech(
        request=tts.SynthesizeSpeechRequest(
            input=tts.SynthesisInput(ssml=ssml),
            voice=voice,
            audio_config=tts.AudioConfig(audio_encoding=tts.AudioEncoding.MP3),
            enable_time_pointing=[
                tts.SynthesizeSpeechRequest.TimepointType.SSML_MARK]
        )
    )

    # cheesy conversion of array of Timepoint proto.Message objects into plain-old data
    marks = [dict(sec=t.time_seconds, name=t.mark_name)
             for t in response.timepoints]

    name = basename.with_suffix('.json')
    with name.open('w') as out:
        json.dump(marks, out)
    print(f'Marks content written to file: {name}')

    name = basename.with_suffix('.mp3')
    with name.open('wb') as out:
        out.write(response.audio_content)
    print(f'Audio content written to file: {name}')


go_ssml(Path.cwd() / 'demo', """
<speak>
Go from <mark name="here"/> here, to <mark name="there"/> there!
</speak>
""")

How to use setBodyContent(HttpBodyContent) in Katalon

I have been trying to update our test management API once the Katalon test has completed running.
We are using Adaptavist Test Management in JIRA. I am not trying to update the Katalon JIRA add-on by the way.
The API call, for Adaptavist, needs to be a POST and have a body message of items like the example {"projectKey": "FVS", "testCaseKey": "FVS-T1", "status": "Pass", "environment": "DEV"}
I would eventually replace these items with the Katalon test result variables as appropriate.
I have created a Service Call in the Object Repository which deals with the auth settings; this works fine if I test the request in the editor with these sample values.
When I come to add the script to the Test Case itself, I am struggling to get it to work, let alone replace the variables with the actual values.
I currently have this:
//run test
WebUI.openBrowser('')
WebUI.navigateToUrl(GlobalVariable.MainURL)
WebUI.verifyElementClickable(findTestObject('img_img-responsive_1'))
WebUI.verifyElementClickable(findTestObject('img_img-responsive_2'))
WebUI.verifyElementClickable(findTestObject('img_img-responsive_3'))
WebUI.closeBrowser()

//update JIRA
RequestObject getJIRAUpdateObject = (RequestObject) findTestObject('Web Service Calls/Update JIRA')
String vsRequestBody = '{"projectKey": "FVS", "testCaseKey": "FVS-T1", "status": "Pass", "environment": "DEV"}';
body = getJIRAUpdateObject.setHttpBody(vsRequestBody)
WS.sendRequest(getJIRAUpdateObject)
I also have the following additional imports
import com.kms.katalon.core.testobject.ResponseObject
import com.kms.katalon.core.testobject.RequestObject
Now in the script editor, I am told that setHttpBody is deprecated in Katalon version 5.4+ (I am using 5.4.1) and that I should use setBodyContent(HttpBodyContent) instead, but when I look at the API documentation for it, I cannot work out the syntax of how I am supposed to use it.
Does anyone know how I should change the code, or have examples of how to change the above code to use this new method?
Any help is much appreciated.
As answered on the Katalon forum:
In your case, the body content is a text body, so the suitable implementation would be:
import com.kms.katalon.core.testobject.impl.HttpTextBodyContent //for text in body
import com.kms.katalon.core.testobject.impl.HttpFileBodyContent //for file in body
import com.kms.katalon.core.testobject.impl.HttpFormDataBodyContent //for form data body
import com.kms.katalon.core.testobject.impl.HttpUrlEncodedBodyContent //for URL encoded text body
setBodyContent(new HttpTextBodyContent(your_text))
getJIRAUpdateObject.setBodyContent(new HttpTextBodyContent(vsRequestBody)) // applied to the request object from the test case above
(API docs for HttpBodyContent implementation.)

Making a Google BigQuery from Python on Windows

I am trying to do something that is very simple in other data services: make a relatively simple SQL query and return it as a dataframe in Python. I am on Windows 10 and using Python 2.7 (specifically Canopy 1.7.4).
Typically this would be done with pandas.read_sql_query, but due to some specifics of BigQuery it requires a different method, pandas.io.gbq.read_gbq.
This method works fine unless the query result is large. If you run a big query on BigQuery, you get the error:
GenericGBQException: Reason: responseTooLarge, Message: Response too large to return. Consider setting allowLargeResults to true in your job configuration. For more information, see https://cloud.google.com/bigquery/troubleshooting-errors
This was asked and answered before in this ticket, but neither of the solutions is relevant for my case:
Python BigQuery allowLargeResults with pandas.io.gbq
One solution is for Python 3, so it is a nonstarter. The other gives an error because I'm unable to set my credentials as an environment variable in Windows:
ApplicationDefaultCredentialsError: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
I was able to download the JSON credentials file and I have set it as an environment variable in the few ways I know how, but I still get the above error. Do I need to load it in some way in Python? It seems to be looking for the credentials but is unable to find them. Is there a special way to set the file as an environment variable in this case?
You can do this in Python 2.7 by changing the default dialect from legacy to standard in the pd.read_gbq function:
pd.read_gbq(query, 'my-super-project', dialect='standard')
Indeed, the BigQuery documentation says the following about the allowLargeResults parameter: "For standard SQL queries, this flag is ignored and large results are always allowed."
I have found two ways of directly importing the JSON credentials file, both based on the original answer in Python BigQuery allowLargeResults with pandas.io.gbq.
1) Credit to Tim Swast
First
pip install google-api-python-client
pip install google-auth
pip install google-cloud-core
then
replace
credentials = GoogleCredentials.get_application_default()
in create_service() with
from google.oauth2 import service_account
credentials = service_account.Credentials.from_service_account_file('path/file.json')
2) Set the environment variable manually in the code, like:
import os,os.path
os.environ['GOOGLE_APPLICATION_CREDENTIALS']=os.path.expanduser('path/file.json')
I prefer method 2 since it does not require installing new modules and is also closer to the intended use of the JSON credentials.
Note:
You must create a destinationTable and add the information to run_query()
Here is code that works fully within Python 2.7 on Windows:
import pandas as pd

my_qry = "<insert your big query here>"

### Here put the data from the credentials file of the service account - all fields are available from there ###
my_file = """{
  "type": "service_account",
  "project_id": "cb4recs",
  "private_key_id": "<id>",
  "private_key": "<your private key>\n",
  "client_email": "<email>",
  "client_id": "<id>",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "<x509 url>"
}"""

df = pd.read_gbq(my_qry, project_id='<your project id>', private_key=my_file)
That's it :)

Does springfox-swagger2 UI support choosing multiple files at once?

I use Spring Boot with swagger-ui (springfox-swagger2) integrated, and I want to be able to upload multiple files at once. Unfortunately the Swagger UI doesn't appear to allow this, at least not given my controller method.
My controller method signature:
@ApiOperation(
    value = "batch upload goods cover image",
    notes = "batch upload goods cover image",
    response = UploadCoverResultDTO.class,
    responseContainer = "List"
)
public Result<?> uploadGoodsCover(@ApiParam(value = "Image array", allowMultiple = true,
        required = true) @RequestPart("image") MultipartFile[] files) throws IOException {
The Swagger UI generated from this only offers a single file input (screenshot omitted), but I was expecting a UI that lets me select multiple files at once. It's more convenient to choose all pictures in a folder in one go rather than one at a time, e.g.:
<input type="file" name="img" multiple="multiple"/>
Does springfox-swagger2 support this? If so, what changes do I need to make?
Update: as pointed out by @Helen, this is now supported in Swagger UI 3.26.0 with OpenAPI 3 and should be in the next release of Springfox 3.
Springfox 2: unfortunately the answer is no.
Springfox Swagger2 does not support this because it's not yet supported by Swagger: https://github.com/springfox/springfox/issues/1072
Relevant Swagger issues:
https://github.com/swagger-api/swagger-ui/issues/4600 (fixed in 3.26.0)
https://github.com/OAI/OpenAPI-Specification/issues/254