Newman loads the pfx certificate but it is not used to connect to the endpoint - ssl

I'm having an issue executing a Postman collection from Newman that involves loading a PFX certificate to establish a mutual TLS (TLS-MA) connection.
From the Postman application, the certificate is loaded correctly (in the settings) and used for the domain https://domain1.com to connect to the mutual-TLS counterpart server.
When I export the JSON collection and environment, there is no mention of the domain or the associated certificate.
Checking the JSON schema, Newman accepts a certificate definition in the request, but applying it does not work. Here is my example:
"request": {
"method": "GET",
"header": [],
"certificate": {
"name": "Dev or Test Server",
"matches": ["https://domain1.com/*"],
"cert": { "src": "./certificate.pfx" }
},
"url": {
"raw": "https://domain1.com/as/authorization.oauth2",
"host": ["https://domain1.com"],
"path": ["as", "authorization.oauth2"],
"query": [
{
I also tried applying the certificate configuration in an external file, cert-list.json, with the following content:
[{
  "name": "Dev or Test Server",
  "matches": ["https://domain1.com/*"],
  "cert": { "src": "./certificate-t.pfx" }
}]
but it does not work either.
Here is the Newman command:
newman run domain.postman_collection.json -n 1 --ssl-client-cert-list cert-list.json -e env.postman_environment.json -r cli --verbose
Do you know what I am doing wrong?

Change cert to pfx. Try:
[{
  "name": "Dev or Test Server",
  "matches": ["https://domain1.com/*"],
  "pfx": { "src": "./certificate-t.pfx" }
}]
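If the .pfx file is protected by a password, the certificate list entry should also accept a passphrase field alongside pfx. A minimal sketch, assuming a password-protected certificate (the passphrase value is a placeholder):
[{
  "name": "Dev or Test Server",
  "matches": ["https://domain1.com/*"],
  "pfx": { "src": "./certificate-t.pfx" },
  "passphrase": "your-pfx-password"
}]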

Related

ASP.NET Core (3.1) re-using IIS certificate in self-hosted (Kestrel) application

I have an ASP.NET Core (3.1) application which is self-hosted and running as a service. I would like to expose an HTTPS endpoint for it. On the same machine there is IIS installed with HTTPS already configured, together with a certificate:
The certificate seems to be stored in the local computer certificate store:
I can also list it via PowerShell:
> get-childitem cert:\LocalMachine\My\ | format-table NotAfter, Subject
NotAfter Subject
-------- -------
27.10.2023 07:38:45 <irrelevant>
08.03.2022 09:52:44 CN=a7642e58-2cdf-4e9b-a277-60fad84d7c64, DC=3336d6b0-b132-47ee-a49b-3ab470a5336e
23.02.2022 21:51:53 CN=a7642e58-2cdf-4e9b-a277-60fad84d7c64, DC=3336d6b0-b132-47ee-a49b-3ab470a5336e
27.10.2031 06:48:06 CN=a7642e58-2cdf-4e9b-a277-60fad84d7c64
26.10.2024 10:41:03 E=****.com, CN=****, OU=IT, O=****, L=****, S=***, C=**
I changed the appsettings.json to use the certificate from the store:
{
  "Logging": {
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Warning"
    }
  },
  "AllowedHosts": "*",
  "Kestrel": {
    "EndPoints": {
      "Http": {
        "Url": "http://*:5000"
      },
      "HttpsDefaultCert": {
        "Url": "https://*:5001"
      }
    },
    "Certificates": {
      "Default": {
        "Subject": "E=****.com, CN=****, OU=IT, O=****, L=****, S=***, C=**",
        "Store": "My",
        "Location": "LocalMachine",
        "AllowInvalid": "true"
      }
    }
  }
}
However, this does not seem to work. I always get the following error:
System.InvalidOperationException: The requested certificate E=****.com, CN=****, OU=IT, O=****, L=****, S=***, C=** could not be found in LocalMachine/My with AllowInvalid setting: True
I do not know what could be the problem. The only thing that I think might be problematic is that the certificate subject actually contains newlines:
I do not know if this is the problem, and I do not know how to enter it in appsettings.json, as multiline values cannot be entered.
I've managed to track down the issue. Kestrel uses FindBySubjectName when searching for the certificate.
FindBySubjectName does a substring search and will not match the full Subject of the certificate. If your certificate subject is something like 'CN=my-certificate', then searching for 'CN=my-certificate' will not find anything. Searching only for 'my-certificate' will work.
Additional note: in addition to specifying the correct search expression, make sure that the account under which you are running the application has sufficient permissions to read the certificate from the certificate store. Certificates have ACLs, so you do not have to run your app as an administrator.
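For example, with a certificate whose subject is CN=my-certificate, a store-based configuration along these lines should work (the subject, store, and location values are illustrative):
{
  "Kestrel": {
    "Certificates": {
      "Default": {
        "Subject": "my-certificate",
        "Store": "My",
        "Location": "LocalMachine",
        "AllowInvalid": "true"
      }
    }
  }
}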
I refer to the documentation for configuring SSL certificates for an ASP.NET Core app running on Kestrel.
I noticed that some URL and port settings also get stored in the Properties/launchSettings.json file.
See Here: Configure endpoints for the ASP.NET Core Kestrel web server
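For reference, the URL settings in Properties/launchSettings.json typically look something like the sketch below (the profile name and ports are illustrative; this file only affects local launches, not a published service):
{
  "profiles": {
    "MyApp": {
      "commandName": "Project",
      "applicationUrl": "https://localhost:5001;http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}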
Further, I noticed that you have put the certificate under Certificates:Default. There are other ways to configure the certificate; you could try them.
In the following appsettings.json example:
Set AllowInvalid to true to permit the use of invalid certificates (for example, self-signed certificates).
Any HTTPS endpoint that doesn't specify a certificate (HttpsDefaultCert in the example that follows) falls back to the cert defined under Certificates:Default or the development certificate.
{
  "Kestrel": {
    "Endpoints": {
      "Http": {
        "Url": "http://localhost:5000"
      },
      "HttpsInlineCertFile": {
        "Url": "https://localhost:5001",
        "Certificate": {
          "Path": "<path to .pfx file>",
          "Password": "$CREDENTIAL_PLACEHOLDER$"
        }
      },
      "HttpsInlineCertAndKeyFile": {
        "Url": "https://localhost:5002",
        "Certificate": {
          "Path": "<path to .pem/.crt file>",
          "KeyPath": "<path to .key file>",
          "Password": "$CREDENTIAL_PLACEHOLDER$"
        }
      },
      "HttpsInlineCertStore": {
        "Url": "https://localhost:5003",
        "Certificate": {
          "Subject": "<subject; required>",
          "Store": "<certificate store; required>",
          "Location": "<location; defaults to CurrentUser>",
          "AllowInvalid": "<true or false; defaults to false>"
        }
      },
      "HttpsDefaultCert": {
        "Url": "https://localhost:5004"
      }
    },
    "Certificates": {
      "Default": {
        "Path": "<path to .pfx file>",
        "Password": "$CREDENTIAL_PLACEHOLDER$"
      }
    }
  }
}
Schema notes:
Endpoint names are case-insensitive. For example, HTTPS and Https are equivalent.
The Url parameter is required for each endpoint. The format for this parameter is the same as the top-level Urls configuration parameter except that it's limited to a single value.
These endpoints replace those defined in the top-level Urls configuration rather than adding to them. Endpoints defined in code via Listen are cumulative with the endpoints defined in the configuration section.
The Certificate section is optional. If the Certificate section isn't specified, the defaults defined in Certificates:Default are used. If no defaults are available, the development certificate is used. If there are no defaults and the development certificate isn't present, the server throws an exception and fails to start.
The Certificate section supports multiple certificate sources.
Any number of endpoints may be defined in Configuration as long as they don't cause port conflicts.
Reference: Replace the default certificate from configuration

GoDaddy API to create subdomain returns "The given domain is not registered, or does not have a zone file"

I'm trying to use GoDaddy's API to create a subdomain using the following http request:
PATCH /v1/domains/domainName.com/records
Host: api.ote-godaddy.com
Authorization: sso-key API_KEY:API_SECRET
Content-Type: application/json
Content-Length: 100
[
  {
    "data": "111.111.111.111",
    "name": "subdomainName",
    "ttl": 6000,
    "type": "A"
  }
]
but I get the following response:
{
  "code": "UNKNOWN_DOMAIN",
  "message": "The given domain is not registered, or does not have a zone file"
}
Please change the host name to https://api.godaddy.com. Your request will only work against the production URL.
Please generate a production-level API key and secret.
Body (raw, JSON type):
[
  {
    "data": "YourServerIp",
    "name": "subdomainName",
    "port": 80,
    "priority": 10,
    "protocol": "string",
    "service": "string",
    "ttl": 600,
    "type": "A"
  }
]
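For illustration, the same update as a curl call against the production host might look roughly like this (the key, secret, domain name, and IP are placeholders):
curl -X PATCH "https://api.godaddy.com/v1/domains/domainName.com/records" \
  -H "Authorization: sso-key API_KEY:API_SECRET" \
  -H "Content-Type: application/json" \
  -d '[{"data": "YourServerIp", "name": "subdomainName", "ttl": 600, "type": "A"}]'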
Note:
This only works when the primary domain already exists in your GoDaddy account.
I figured out these requests only work using the production authorization against the production URL, but won't work using the OTE authorization against the OTE URL. Maybe the domain has to be registered as an OTE domain and not a production domain. Not sure.

How can I customize CHANGELOG.md using the standard-version npm package?

I'm using the command standard-version each time I want to publish new version, but the yielded changes in the CHANGELOG.md look like this:
### [10.1.9](https://github.com/my-project-name/compare/v10.1.8...v10.1.9) (2021-03-29)
### [10.1.8](https://github.com/my-project-name/compare/v10.1.7...v10.1.8) (2021-03-29)
### [10.1.7](https://github.com/my-project-name/compare/v10.1.6...v10.1.7) (2021-03-29)
First, the links do not work: the GitHub URL is not correct and I want to configure it to point to the right URL. Second, I'd like to configure the links shown in the changelog file (there are several types).
I tried to use this documentation but didn't find anything that could help me:
https://github.com/conventional-changelog/conventional-changelog
So how do I configure the way standard-version generates CHANGELOG.md? Can someone provide an example?
Yes.
According to the docs:
You can configure standard-version either by:
Placing a standard-version stanza in your package.json (assuming your project is JavaScript).
Creating a .versionrc, .versionrc.json or .versionrc.js.
If you are using a .versionrc.js your default export must be a configuration object, or a function returning a configuration object.
Any of the command line parameters accepted by standard-version can instead be provided via configuration.
Please refer to the conventional-changelog-config-spec for details on available configuration options.
example:
.versionrc
{
  "types": [
    { "type": "feat", "section": "Features" },
    { "type": "fix", "section": "Bug Fixes" },
    { "type": "chore", "hidden": true },
    { "type": "docs", "hidden": true },
    { "type": "style", "hidden": true },
    { "type": "refactor", "section": "Refactor" },
    { "type": "perf", "section": "Performance" },
    { "type": "test", "hidden": true }
  ]
}
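To address the broken compare links specifically, the conventional-changelog-config-spec also defines URL format options that can go in the same .versionrc. A minimal sketch, assuming the owner/repository values below are replaced with your own:
{
  "commitUrlFormat": "https://github.com/my-org/my-project/commit/{{hash}}",
  "compareUrlFormat": "https://github.com/my-org/my-project/compare/{{previousTag}}...{{currentTag}}",
  "issueUrlFormat": "https://github.com/my-org/my-project/issues/{{id}}"
}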

Packer ssh timeout

I am trying to build images with Packer in a Jenkins pipeline. However, the Packer SSH provisioner does not work, as SSH never becomes available and the build errors out with a timeout.
Further investigation shows that the image is missing the network interface file ifcfg-eth0 in the /etc/sysconfig/network-scripts directory, so it never gets an IP and does not accept SSH connections.
The problem is that there are many such images to be generated, and I can't open each one manually in the VirtualBox GUI, correct the issue, and repack. Is there any other possible solution?
{
  "variables": {
    "build_base": ".",
    "isref_machine": "create-ova-caf",
    "build_name": "virtual-box-jenkins",
    "output_name": "packer-virtual-box",
    "disk_size": "40000",
    "ram": "1024",
    "disk_adapter": "ide"
  },
  "builders": [
    {
      "name": "{{user `build_name`}}",
      "type": "virtualbox-iso",
      "guest_os_type": "Other_64",
      "iso_url": "rhelis74_1710051533.iso",
      "iso_checksum": "",
      "iso_checksum_type": "none",
      "hard_drive_interface": "{{user `disk_adapter`}}",
      "ssh_username": "root",
      "ssh_password": "Secret1.0",
      "shutdown_command": "shutdown -P now",
      "guest_additions_mode": "disable",
      "boot_wait": "3s",
      "boot_command": ["auto<enter>"],
      "ssh_timeout": "40m",
      "headless": "true",
      "vm_name": "{{user `output_name`}}",
      "disk_size": "{{user `disk_size`}}",
      "output_directory": "{{user `build_base`}}/output-{{build_name}}",
      "format": "ovf",
      "vrdp_bind_address": "0.0.0.0",
      "vboxmanage": [
        ["modifyvm", "{{.Name}}", "--nictype1", "virtio"],
        ["modifyvm", "{{.Name}}", "--memory", "{{user `ram`}}"]
      ],
      "skip_export": true,
      "keep_registered": true
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["ls"]
    }
  ]
}
When you don't need the SSH connection during the provisioning process, you can switch it off. See the Packer documentation about communicators; there you will find the option none, which switches off the communication between host and guest.
{
  "builders": [
    {
      "type": "virtualbox-iso",
      "communicator": "none"
    }
  ]
}
Packer builders documentation: virtualbox-iso

Testing API via newman

Hi there, I am testing an API via Postman. I want to automate my tests and have downloaded Newman. The request I use in Postman has been exported as a collection and is giving me a 404 via Newman. Any pointers much appreciated. The IP address has been changed for obvious reasons.
{
  "id": "11f345f7-9f12-58fb-099d-27f11233cee7",
  "name": "GC",
  "description": "",
  "order": [
    "f7fe3f94-0dd2-6dba-05b9-29ae7e571ed9"
  ],
  "folders": [],
  "timestamp": 1446559540652,
  "owner": "195242",
  "remoteLink": "",
  "public": false,
  "requests": [
    {
      "id": "f7fe3f94-0dd2-6dba-05b9-29ae7e571ed9",
      "headers": "",
      "url": "http://218.24.201.144/cb/mobile/v1/residences/568288d0-71b6-11e5-ad9f-0242ac110908/lastAirQuality/rooms",
      "pathVariables": {},
      "preRequestScript": "",
      "method": "GET",
      "collectionId": "11f345f7-9f12-58fb-099d-27f11233cee7",
      "data": [],
      "dataMode": "params",
      "name": "http://218.24.201.144/cb/mobile/v1/residences/568288d0-71b6-11e5-ad9f-0242ac110908/lastAirQuality/rooms",
      "description": "",
      "descriptionFormat": "html",
      "time": 1446559548262,
      "version": 2,
      "responses": [],
      "tests": "",
      "currentHelper": "normal",
      "helperAttributes": {}
    }
  ]
}
This is the output I get from Newman:
$ newman -c GC.json.postman_collection
Iteration 1 of 1
404 218ms http://218.24.201.144/cb/mobile/v1/residences/568288d0-71b6-11e5-ad9f-0242ac110908/lastAirQuality/rooms http://218.24.201.144/cb/mobile/v1/residences/568288d0-71b6-11e5-ad9f-0242ac110908/lastAirQuality/rooms
Summary:
Parent          Pass Count  Fail Count
--------------------------------------
Collection GC   0           0
Total           0           0
Do you have tests set? For example:
var data = JSON.parse(responseBody);
tests["Pass this case"] = data.id === "11f345f7-9f12-58fb-099d-27f11233cee7";
Create tests and don't forget to update your JSON collection before testing.
Check this #298
A 404 means that the resource you are looking for does not exist, or the system is unable to find the requested data. You can try using:
$ newman run <path of your collection>
npm install postman
npm install newman
npm install newman-reporter-html
https://github.com/shahing/api-automation-tests
Run the command in your directory: newman run test.js
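For reference, a typical Newman run with the HTML reporter installed above might look roughly like this (the collection and output file names are just examples):
$ newman run GC.postman_collection.json -r cli,html --reporter-html-export ./newman-report.html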