How to reduce AWS SES bounce rate with respect to QA environment automation scripts - amazon-ses

We are using the AWS SES service for our email/notification service. Our QA team runs automation scripts while testing, and our email bounce rate is increasing rapidly. Could anyone advise on best practices and how we can reduce our bounce rate with respect to the QA automation scripts?
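There is no answer in this thread, but the approach AWS itself documents for this kind of situation is to point QA automation at the SES mailbox simulator addresses (for example success@simulator.amazonses.com or bounce@simulator.amazonses.com), since mail sent to the simulator does not count against your bounce metrics. A minimal boto3 sketch, assuming an SES-verified sender and placeholder region/content:

```python
# Sketch only: region, sender address and message content are placeholders.
import boto3

ses = boto3.client("ses", region_name="us-east-1")

ses.send_email(
    Source="qa-sender@your-verified-domain.example",  # must be an SES-verified identity
    Destination={"ToAddresses": ["success@simulator.amazonses.com"]},
    Message={
        "Subject": {"Data": "QA automation test"},
        "Body": {"Text": {"Data": "Sent by the QA suite against the SES mailbox simulator."}},
    },
)
```

Making the recipient list configurable per environment, so the QA configuration only ever targets simulator (or otherwise disposable) addresses, keeps hard-bouncing fake recipients out of automated runs.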

Related

Import macros with .jar extension produced by Selenium

Good morning everyone,
I would like to understand whether it is possible, and how, to import macros with a .jar extension produced by Selenium into WebInspect (version 21.2) and then use them to conduct a scan. Let me try to explain: on our machines we only have WebInspect; the tests with Selenium are run by other people on other systems. We wanted to understand whether, by simply passing us these files, WebInspect would be able to read and execute them, or whether it is necessary to put the WebInspect proxy into the Selenium scripts while this other team records the macros. Can anyone help me? Thank you.
I need to speed up the process by taking advantage of macros already recorded for other tests prior to mine, so as to avoid a new recording.
PS: I have already read the documentation, but it does not explain whether what I am asking is actually possible; it explains other procedures.
Unfortunately, WebInspect's integration with Selenium is essentially a real-time replay of the Selenium scripts, used as the Crawl phase of the scan. WebInspect cannot simply consume your JAR file. It will require you to set up a listener/proxy of some sort so that, when the script replays, WebInspect can capture the traffic; it then performs an Audit-Only of what it saw. There are two methods to insert this proxy technology into the process, as detailed in the WebInspect Help. The user must configure some features so that when WebInspect replays the Selenium script, everything connects automatically.
e.g. from WI 22.10:
file:///C:/ProgramData/HP/HP%20WebInspect/Help/WebInspect/index.htm#Selenium_WD_1.htm?TocPath=Using%2520WebInspect%2520Features%257CIntegrating%2520with%2520Selenium%2520WebDriver%257C_____0
Besides Selenium, there are several other alternatives when it comes to Functional-Testing-driven WebInspect scans. You had asked about requiring the dev staff to record something to provide to you for WebInspect.
Have the QA team capture their Selenium test runs using BURP Proxy. Have them save that captured proxy traffic as an artifact for the security team, e.g. "macro1.burpcap". Use the Workflow-driven Scan wizard options in WebInspect and simply Import that BURP capture as a native Workflow Macro. I like this option since BURP is easy to acquire and run, and it supports multiple OSes as a Java app.
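For the QA team's side of that hand-off, routing a Selenium run through a local Burp listener is just a proxy setting on the driver. A minimal sketch using the Python Selenium bindings, assuming Burp's default listener on 127.0.0.1:8080 and a hypothetical QA URL:

```python
from selenium import webdriver

options = webdriver.ChromeOptions()
# Route all browser traffic through the local Burp listener (default 127.0.0.1:8080).
options.add_argument("--proxy-server=http://127.0.0.1:8080")
# Burp's self-signed CA will trigger TLS warnings unless it is trusted; for a
# throwaway QA browser profile this flag is a common workaround.
options.add_argument("--ignore-certificate-errors")

driver = webdriver.Chrome(options=options)
driver.get("https://qa.example.com/")  # hypothetical application under test
# ... existing test steps run unchanged; Burp records the traffic ...
driver.quit()
```

The same pattern applies to the other intercepting proxies discussed below (WebInspect's Web Proxy, the FAST Proxy): only the host and port change.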
WebInspect's Web Proxy could also be used, just as BURP was used above. However, this complicates things for your dev team, as they do not have access to WebInspect. There are other free options for WebInspect customers which your dev team could install, including the Standalone WebInspect Toolkit, the Web Proxy standalone tool, or the Web Proxy API tool (REST service). One annoyance with all of these today is that they (currently) require Windows, and an authorized WebInspect user (you) has to download and deploy these installers inside your network for the dev staff to get them.
The WebInspect REST API offers several endpoints for Proxy listeners. This means that remote users (i.e. your Dev) could spawn a proxy listener, run their functional test script through that proxy, then have the captured data saved as a Workflow Macro, and kill the listener. By itself, this combination could produce the artifacts your appsec team will want to use later in their Workflow-driven Scan.
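As a rough illustration of that flow, the sketch below strings the steps together with Python's requests library. The base URL, port, endpoint paths and payloads are assumptions for illustration only; take the real routes from the API documentation on your own WebInspect instance.

```python
# Illustrative sketch only -- endpoint paths and payloads vary by WebInspect
# version; confirm them against your instance's API documentation.
import requests

WI_API = "http://webinspect-host:8083/webinspect"  # assumed API base URL

# 1. Spawn a proxy listener for the functional test run (illustrative route).
resp = requests.post(f"{WI_API}/proxy", json={"port": 9090})
instance_id = resp.json()["instanceId"]

# 2. The dev team now runs their functional test script with its HTTP(S) proxy
#    pointed at webinspect-host:9090 so the listener captures the traffic.

# 3. Save the captured traffic as a Workflow Macro (illustrative route).
macro = requests.get(f"{WI_API}/proxy/{instance_id}.webmacro")
with open("functional-run.webmacro", "wb") as fh:
    fh.write(macro.content)

# 4. Kill the listener when done.
requests.delete(f"{WI_API}/proxy/{instance_id}")
```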
To support this with further automation ("developer-driven DAST"), you could have those same proxy API calls add a New Scan endpoint call at the end, to go ahead and trigger a Workflow-driven Scan using the Macro that was just recorded in the prior calls. This is good for putting in a CI/CD pipeline, provided you have a dedicated WebInspect machine sitting on the network with its API available.
The challenge with using the WebInspect API as a "poor man's pipeline scanning tool" is that WebInspect is simplistic and has no resource management features in and of itself. This means that your pipelines could trigger lots of scans quickly, and the WebInspect machine would fall over after 4+ scans got started. What a mess! You would have to design API checks into your pipelines to monitor the number of Running scans on the WebInspect machine, and then pause/poll the pipeline until the WebInspect machine was free and the pipeline could submit its new scan order.
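A crude guard for that problem is to have the pipeline poll the API for running scans before it submits a new scan order. Again a hedged sketch: the route, query parameter and response shape here are illustrative, not the documented API.

```python
# Illustrative sketch only -- check the real scan-listing endpoint and fields
# against your WebInspect API documentation.
import time
import requests

WI_API = "http://webinspect-host:8083/webinspect"  # assumed API base URL
MAX_CONCURRENT = 2  # stay well below the point where the scanner falls over


def wait_for_free_slot(poll_seconds=60):
    """Block the pipeline until the scanner has capacity."""
    while True:
        running = requests.get(
            f"{WI_API}/scanner/scans", params={"Status": "Running"}
        ).json()
        if len(running) < MAX_CONCURRENT:
            return
        time.sleep(poll_seconds)


wait_for_free_slot()
# ...now submit the new Workflow-driven scan order...
```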
Our solution for this sort of enterprise automation would be to use Fortify ScanCentral DAST instead of just WebInspect standalone. SCDAST affords a central web GUI for your appsec staff to configure/operate/review scans, with multiple headless "WebInspect" scan machines managed in resource pools. Incoming scan orders (REST API calls) would be queued and prioritized automatically, and the remote scan machines would be brought online/shut down as needed (think of it as a headless WebInspect API on Docker). So now your CI/CD pipelines can simply trigger the DAST scan and not worry about the scanner machine resources.
This brings me around to another great option for your Selenium needs, the Fortify FAST Proxy. This solution only operates with ScanCentral DAST, which is why I had to go on that side tangent above. With FAST Proxy, your devs would only start up the FAST Proxy (which includes authentication details for the ScanCentral API), run their Selenium scripts through that proxy, and then kill the FAST Proxy when done. That completes their Functional Testing with Selenium. Meanwhile, on shutdown, the FAST Proxy automatically delivers the captured traffic to ScanCentral as a new Workflow-driven Scan order. A little while later, the DAST scan of their Selenium script traffic completes. If you configured Notifications, the devs now receive a link to their appsec results.

API automation execution from CI/CD Platforms

My question is about API automation execution from CI/CD Platforms like Jenkins/Bamboo/Azure etc.
For API automation, it is important to have control over the machine from which the APIs are triggered, as we may need to open the firewall for some APIs or add certificates to that machine's Java keystore.
But if I run my API tests from the CI/CD agent machines, I cannot have that control, as those agent machines are maintained by an organization-level team who will not entertain any such modifications, since the agents are used by all other teams.
How can this issue be overcome? How is it actually done?
If anyone out there has faced the same situation in their own company, I want to hear from them.
Thanks a lot for your support.

Paid service that can send a big amount of concurrent requests to my site for testing?

Is there a service that can load my website with a big number of concurrent calls, i.e. a service that can go to my site, authorize, and do some things? Which one do you recommend?
There is no service which will do everything for you; you will have to choose a load testing tool, script your scenario(s), and then choose the online service which supports the load testing tool of your choice (a minimal example of such a script follows the list below).
A few examples:
LoadRunner and StormRunner
Gatling and Gatling Frontline
k6 and LoadImpact
Apache JMeter, Locust, Grinder, Gatling, Tsung, Selenium (https://www.selenium.dev/), etc. and BlazeMeter
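For the "authorize and do some things" part, here is what a minimal scripted scenario might look like in Locust, one of the tools listed above. The login path, form fields and pages are hypothetical placeholders for your own site:

```python
# locustfile.py -- minimal sketch of "log in, then do some things".
# The /login endpoint, form fields and pages below are assumptions.
from locust import HttpUser, task, between


class AuthenticatedUser(HttpUser):
    wait_time = between(1, 3)  # think time between tasks per simulated user

    def on_start(self):
        # Authorize once per simulated user.
        self.client.post("/login", data={"username": "qa_user", "password": "secret"})

    @task(3)
    def browse(self):
        self.client.get("/dashboard")  # hypothetical page

    @task(1)
    def do_something(self):
        self.client.post("/api/items", json={"name": "load-test-item"})  # hypothetical API
```

Run it locally with "locust -f locustfile.py --host https://your-site.example" to validate the scenario, then move the same script to a hosted service that supports your chosen tool to generate the actual concurrent load.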

Automated testing of local storage

We recently implemented local storage on our websites. I am curious whether there is some way to automate testing of local storage. I don't mean unit tests, etc.; I am looking at it from a tester's perspective, for example the possibility of testing it with Selenium or another test automation tool.
Thank you
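There is no answer in this thread, but as an illustration of what is being asked: Selenium can reach local storage through JavaScript execution, so an ordinary UI test can set, read and assert on it. A minimal sketch with the Python bindings and a hypothetical site and key:

```python
# Sketch only: the URL and the "theme" key are hypothetical examples.
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://qa.example.com/")  # hypothetical site under test

# Write, read and clear localStorage via the browser's own JS API.
driver.execute_script("window.localStorage.setItem('theme', 'dark');")
value = driver.execute_script("return window.localStorage.getItem('theme');")
assert value == "dark"
driver.execute_script("window.localStorage.clear();")

driver.quit()
```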

Demandware site testing

We are using Demandware for our eCommerce site, so they provide sandboxes for development and testing.
I am automating the site for regression testing, but if I run automation scripts on the testing sandbox, sometimes a page takes longer to load and, as a result, the test fails.
So what is the best way to do automation testing on Demandware-related websites?
Is it possible to deploy the site to the cloud?
Is it possible to increase the performance of the testing-related sandbox so tests will not fail?
Can you please share your thoughts?
Use a development instance for these tests, as it is closer to production in that it uses the Akamai CDN, so pages will load relatively faster.
If sandboxes/development instances are performing slowly, it may be a good idea to look at the Pipeline Profiler in Demandware Business Manager to get insight into where the performance bottleneck lies.
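Complementary to the answer above (and not part of it): on the test-script side, slow sandbox pages are usually absorbed with a longer page-load timeout and explicit waits instead of fixed sleeps, so a slow but successful load does not fail the test. A minimal Selenium sketch with a hypothetical instance URL and locator:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.set_page_load_timeout(120)  # tolerate a slow sandbox instead of failing fast

driver.get("https://dev01-yourshop.demandware.net/")  # hypothetical instance URL
# Wait for a concrete element rather than assuming the page rendered in time.
WebDriverWait(driver, 60).until(
    EC.visibility_of_element_located((By.CSS_SELECTOR, ".product-grid"))  # hypothetical locator
)

driver.quit()
```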