Is it possible to create local repositories using curl commands in JFrog? - api

I am hosting a JFrog OSS Docker container which is running behind Nginx with DNS attached to it. I want to create new (local) repositories using the REST API (curl commands). Since it's the free version, is it possible to create local repositories using curl commands in JFrog?
curl -s -uadmin:password -X PUT -H 'Content-Type: application/json' https://devops.com/artifactory/libs-release-local/path/to/directory/ -d '
[ {
  "key" : "example-repo-local",
  "description" : "artifactory repository",
  "type" : "LOCAL",
  "url" : "https://devops.com/artifactory/example-repo-local",
  "packageType" : "Generic"
}'

Creating repositories via the REST API requires Artifactory Pro.
Also, the URL and the JSON configuration seem wrong.
The URL that the request is sent to should not include a path, only the repository key.
The "type" key should be "rclass". The "url" key is only for remote repositories.
See the documentation for more details.
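For reference, assuming a Pro license, a corrected request would target the repositories API and use "rclass" (a sketch, reusing the hypothetical host and repository name from the question):
curl -s -uadmin:password -X PUT -H 'Content-Type: application/json' \
  https://devops.com/artifactory/api/repositories/example-repo-local -d '
{
  "key" : "example-repo-local",
  "rclass" : "local",
  "packageType" : "generic",
  "description" : "artifactory repository"
}'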

Related

How to Programmatically communicate with Restful API using SSH tunnelling

I am building a RESTful API client using the .NET Core framework. The APIs are OpenStack APIs; however, because of the network configuration, I cannot access them from my local computer (also my development computer). When accessing OpenStack normally, I have to SSH into a machine that can SSH into the OpenStack infrastructure.
Bearing this in mind, is it possible to use an SSH tunnel for the API endpoints and then call them in the implemented Web API client? I have tried to do this, but the call to the endpoint returns error 401 - content length is required.
Basically it's possible to call the OpenStack API endpoints through an SSH tunnel without any publicly accessible API endpoints. Because I have no experience with the .NET Core framework, this answer is fairly generic and contains no C# code. I hope it helps you anyway.
IMPORTANT: You should only use the following steps when you have an admin login to the OpenStack deployment, and ONLY when the deployment is a test deployment where breaking it doesn't affect other users.
1. SSH-tunnel
You can forward ports with the command:
ssh -L 127.0.0.1:<PORT>:<IP_REMOTE>:<PORT> <USER_JUMPHOST>@<IP_JUMPHOST> -fN
<PORT> = port of the OpenStack component you want to access remotely (for example 5000 for keystone)
<IP_REMOTE> = IP of the host where your OpenStack deployment is running
<USER_JUMPHOST>@<IP_JUMPHOST> = SSH access to the jumphost, which sits between you and your OpenStack deployment
This has to be done for each OpenStack component. If you don't want this command to run in the background, remove the -fN at the end.
Here you first have to forward Keystone on port 5000.
example: ssh -L 127.0.0.1:5000:192.168.62.1:5000 deployer@192.168.67.1 -fN
You can test the access via curl or a web browser from your local PC:
curl http://127.0.0.1:5000
{"versions": {"values": [{"id": "v3.13", "status": "stable", "updated": "2019-07-19T00:00:00Z", "links": [{"rel": "self", "href": "http://127.0.0.1:5000/v3/"}], "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}]}]}}
2. Change OpenStack endpoints
To also be able to log in to the OpenStack deployment through the tunnel, you have to change the endpoints to listen on localhost on the remote system too, where your OpenStack deployment is running:
Log in normally on your OpenStack deployment as the admin user.
List all endpoints: openstack endpoint list
Change the public and internal endpoint of keystone to localhost:
openstack endpoint set --url http://127.0.0.1:5000 <ID_OF_INTERNAL_KEYSTONE_ENDPOINT>
After changing the internal endpoint, the OpenStack login on the remote system is broken for now, but don't worry.
Now you can log in to OpenStack via the openstack client from your local PC. Here you have to authenticate against localhost. If you use an rc file to log in, you have to change the auth URL to export OS_AUTH_URL=http://127.0.0.1:5000/v3
To run commands like openstack server list through your SSH tunnel, change the internal and public nova endpoints as well by running openstack endpoint set --url "http://127.0.0.1:8774/v2.1" <ID> on your local PC (of course you also need an SSH tunnel for port 8774); see the sketch below.
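Putting steps 1 and 2 together for nova, a minimal sketch (same hypothetical jumphost and remote IP as above; the endpoint IDs are placeholders):
# forward the nova API port through the jumphost, like the keystone tunnel above
ssh -L 127.0.0.1:8774:192.168.62.1:8774 deployer@192.168.67.1 -fN
# point the client at the tunnelled keystone, e.g. in your rc file
export OS_AUTH_URL=http://127.0.0.1:5000/v3
# look up the nova endpoint IDs, then rewrite the internal and public ones
openstack endpoint list
openstack endpoint set --url "http://127.0.0.1:8774/v2.1" <ID_OF_INTERNAL_NOVA_ENDPOINT>
openstack endpoint set --url "http://127.0.0.1:8774/v2.1" <ID_OF_PUBLIC_NOVA_ENDPOINT>
# commands like this should now go through the SSH tunnel
openstack server list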
3. Authenticate against the OpenStack deployment
When you send HTTP requests without the openstack client, you have to manually request an authentication token from the deployment:
Log in normally on your OpenStack deployment.
Make a token request:
curl -v -s -X POST "$OS_AUTH_URL/auth/tokens?nocatalog" -H "Content-Type: application/json" -d '{ "auth": { "identity": { "methods": ["password"],"password": {"user": {"domain": {"name": "'"$OS_USER_DOMAIN_NAME"'"},"name": "'"$OS_USERNAME"'", "password": "'"$OS_PASSWORD"'"} } }, "scope": { "project": { "domain": { "name": "'"$OS_PROJECT_DOMAIN_NAME"'" }, "name": "'"$OS_PROJECT_NAME"'" } } }}' --stderr - | grep X-Subject-Token
This command can be used without changes. The value after the key X-Subject-Token is the token from Keystone. Copy this value and export the token as the environment variable OS_TOKEN, for example like the following line:
export OS_TOKEN=gAAAAABZuj0GZ6g05tKJ0hvihAKXNJgzfoT4TSCgR7cgWaKvIvbD66StJK6cS3FqzR2DosmqofnR_N-HztJXUcVhwF04HQsY9CBqQC7pblGnNIDWCXxnJiCH_jc4W-uMPNA6FBK9TT27vE5q5AIa487GcLLkeJxdchXiDJvw6wHty680eJx3kL4
Make requests with the token.
For example, GET requests with curl:
curl -s -X GET -H "X-Auth-Token: $OS_TOKEN" http://127.0.0.1:5000/v3/users | python -m json.tool

Unable to connect to server via Parse Dashboard?

I have set up the parse-server and parse-dashboard on a dedicated instance on Alibaba Cloud with 2 vCPUs and 8 GB of RAM, with ApsaraDB for MongoDB as the database for Parse.
I successfully set up the dashboard and server. When I try to access the server I get the following error:
"Unable to connect to server."
Parse Dashboard Error Screenshot
I am able to make POST and GET requests successfully, like below:
curl -X POST \
-H "X-Parse-Application-Id: APPLICATION_ID" \
-H "Content-Type: application/json" \
-d '{"score":1337,"playerName":"Sean Plott","cheatMode":false}' \
http://localhost:1337/parse/classes/GameScore
//Response
{
"objectId": "2ntvSpRGIK",
"createdAt": "2016-03-11T23:51:48.050Z"
}
I am able to connect via PuTTY and FTP.
Thanks
While your computer (by that I mean other applications) is reaching the server, the Parse dashboard isn't.
Inside your dashboard config you can try changing
http://localhost:1337/parse
to
http://[ip-address]:1337/parse
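For example, if the dashboard is driven by a parse-dashboard-config.json, the change would look roughly like this (a sketch assuming the standard apps/serverURL/appId/masterKey layout; the app ID, master key, and app name are placeholders):
{
  "apps": [
    {
      "serverURL": "http://[ip-address]:1337/parse",
      "appId": "APPLICATION_ID",
      "masterKey": "MASTER_KEY",
      "appName": "MyApp"
    }
  ]
}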
Also go through this thread; you might get some more insight into why it is not working:
https://github.com/parse-community/parse-dashboard/issues/785

Load balancing in KONG API Gateway

We have multiple instances of a microservice behind the Kong API gateway, where we want to balance the load of user requests.
Say microservice 1 is scaled out to multiple instances behind the Kong API gateway; in such a case the request from user 1 should hit the first instance and the request from user 2 should hit some other instance of the same service, based on availability (load balancing). In other words, can I have multiple upstream URLs for a single API in Kong? We don't want to use Nginx for load balancing. Please advise how we can solve this.
The ring-balancer strategy can be used in Kong if you don't want DNS-based load balancing. For details, please refer to the Kong Load Balancing Reference.
# create an upstream
$ curl -X POST http://kong:8001/upstreams \
--data "name=address.v1.service"
# add two targets to the upstream
$ curl -X POST http://kong:8001/upstreams/address.v1.service/targets \
--data "target=192.168.34.15:80" \
--data "weight=100"
$ curl -X POST http://kong:8001/upstreams/address.v1.service/targets \
--data "target=192.168.34.16:80" \
--data "weight=50"
# create an API targeting the Blue upstream
$ curl -X POST http://kong:8001/apis/ \
--data "name=address-service" \
--data "hosts=address.mydomain.com" \
--data "upstream_url=http://address.v1.service/address"
Requests with host header set to address.mydomain.com will now be proxied by Kong to the two defined targets; 2/3 of the requests will go to http://192.168.34.15:80/address (weight=100), and 1/3 will go to http://192.168.34.16:80/address (weight=50).
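As a quick smoke test (a sketch, assuming Kong's proxy listens on its default port 8000), repeating a request like the following should show responses coming from both targets in roughly that 2/3 to 1/3 ratio:
# send requests through the Kong proxy with the matching Host header
curl -i http://kong:8000/ -H "Host: address.mydomain.com"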
Starting from 0.10 you will be able to create a named upstream, and associate/remove targets from it.
For example if you have upstream_url=http://helloworld/ you can create a helloworld upstream and associate targets to it:
curl -d "name=helloworld" 127.0.0.1:8001/upstreams
curl -d "host=some.host.com" 127.0.0.1:8001/upstreams/helloworld/targets/
curl -d "host=2.2.2.2" 127.0.0.1:8001/upstreams/helloworld/targets/

Use Nexus 3 API to change admin password

I would like to use the Nexus 3 API to change the default admin password as well as the email address using Groovy, but I don't understand how to set the password using the Groovy API. Can someone provide an example of how to do this?
Summary
You can use the REST API to both update user information and change their password. This includes the admin user.
Nexus REST API: Update user information
The default admin user-data.json in my instance is the following:
{
  "userId": "admin",
  "firstName": "Administrator",
  "lastName": "User",
  "emailAddress": "admin@example.org",
  "source": "default",
  "status": "active",
  "readOnly": false,
  "roles": [
    "nx-admin"
  ],
  "externalRoles": []
}
Update the user-data.json to your desired values and use curl with the REST API.
NX_PASSWORD="admin user password"
curl -ifu admin:"${NX_PASSWORD}" \
-XPUT -H 'Content-Type: application/json' \
--data "$(< user-data.json)" \
<nexus base URL>/service/rest/v1/security/users/admin
Nexus REST API: Change password
You'll want to use the Security Management API.
See Nexus 3 backend source code.
OLD_PASSWORD="nexus admin password"
NEW_PASSWORD="your new password"
curl -ifu admin:"${OLD_PASSWORD}" \
-XPUT -H 'Content-Type: text/plain' \
--data "${NEW_PASSWORD}" \
<nexus base URL>/service/rest/v1/security/users/admin/change-password
Screenshot of Nexus documentation
This documentation is only available on a running Nexus instance. You can view this API on your own running Nexus instance by visiting:
Menu: System configuration > System > API.
Old way: Change password during initial onboarding
This only works during initial onboarding. You should definitely not use this method. Just documenting for completeness.
This section is for changing the initial password during onboarding.
Referencing Nexus source
Frontend code
Backend code
You can change the admin user password with a single curl command.
OLD_PASSWORD="initial nexus password"
NEW_PASSWORD="somepass"
curl -ifu admin:"${OLD_PASSWORD}" \
-XPUT -H 'Content-Type: text/plain' \
--data "${NEW_PASSWORD}" \
<nexus base URL>/service/rest/internal/ui/onboarding/change-admin-password
I originally thought changePassword was deprecated, but I was mistaken. Here is an example of updating the admin email address and changing the password:
def user = security.securitySystem.getUser('admin')
user.setEmailAddress('admin@mycompany.com')
security.securitySystem.updateUser(user)
security.securitySystem.changePassword('admin','admin456')
Sonatype Nexus has a change-admin-password internal API to update the admin password, but it's not straightforward to use: it relies on the session ID that is created via the /service/rapture/session endpoint.
curl -v 'https://<hostname>/service/rapture/session' --data 'username=<base64 username>&password=<base64 password>'
curl -v -X PUT 'https://<hostname>/service/rest/internal/ui/onboarding/change-admin-password' -H 'cookie: <NXSESSIONID from the above response>' --data '<plain text password>'
Reference:
https://github.com/sonatype/nexus-public/blob/9b177ab50bd7f8470b08247b146da459170ecc8f/plugins/nexus-onboarding-plugin/src/main/resources/static/rapture/NX/onboarding/step/ChangeAdminPasswordStep.js#L50
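Putting the two calls above together, a rough sketch using a curl cookie jar (this assumes the session endpoint sets an NXSESSIONID cookie that curl can capture and replay; the hostname, credentials, and new password are placeholders):
# base64-encode the credentials expected by /service/rapture/session
USERNAME_B64=$(printf '%s' 'admin' | base64)
PASSWORD_B64=$(printf '%s' '<current password>' | base64)
# log in and store the session cookie in a cookie jar
curl -s -c cookies.txt 'https://<hostname>/service/rapture/session' \
  --data "username=${USERNAME_B64}&password=${PASSWORD_B64}"
# replay the cookie to change the admin password
curl -v -X PUT 'https://<hostname>/service/rest/internal/ui/onboarding/change-admin-password' \
  -b cookies.txt -H 'Content-Type: text/plain' --data '<new plain text password>'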
Install the nexus3 cli:
pip install nexus3-cli
Get the initial admin password (assuming Nexus is running in Docker):
docker exec nexus cat /nexus-data/admin.password
Set environment variables:
export NEXUS3_PASSWORD=<PASSWORD FROM PREVIOUS STEP>
export NEXUS3_USERNAME=<USERNAME>
export NEXUS3_URL=<URL>
Allow remote script execution by updating /nexus-data/etc/nexus.properties and appending the below line:
nexus.scripts.allowCreation=true
Restart nexus service to reload the last change:
docker container restart <nexus>
Create a file reset-password.groovy with the following contents (thanks to @Dennis Hoer):
def user = security.securitySystem.getUser('admin')
user.setEmailAddress('admin@mycompany.com')
security.securitySystem.updateUser(user)
security.securitySystem.changePassword('admin','admin456')
From the command line, create the script and run it to reset the admin password:
nexus3 script create --script-type groovy passreset reset-password.groovy
nexus3 script run passreset
The password is now reset.

mule server auto deployment details

I am using MMC for deployment of a Mule-based application. All applications deployed through MMC go to the apps directory under the Mule server. If I put an application directly under mule-server/apps and launch it, the application runs successfully, but I am not able to view the deployment details in MMC. Where do I need to make changes in the Mule server to view the deployed application's details in MMC?
I need to automate deployment through a shell script. For this, I am creating a sample project, creating a zip file, and copying this zip file under the Mule server apps directory. Finally, once MMC is launched, I need to see the deployed application in MMC for viewing flows, running the application, the flow analyzer, etc.
EDIT
Based on the answer given below, to deploy a new application I used the following command:
curl --basic -u admin:admin -F file=@C:/apps/testserver-1.0.0-SNAPSHOT.zip -F name=test-app -F version=2.0 --header 'Content-Type: multipart/form-data' http://almule1.ux.corp.local:8585/mmc/api/repository
The response I received:
curl: (6) Could not resolve host: multipart {"versionId":"local$5015b8d6-b149-4245-a218-55c12aecc8e7","applicationId":"local$74616cb9-9ecb-4fd6-b167-bf153c8e59fb"}
I am using a Windows environment to deploy to a Unix server.
MMC retrieves information from Mule ESB, so you shouldn't make any changes. Anyway, I think that deploying an application outside MMC is not a good idea. For scripting purposes I would prefer to use the MMC Deployments REST API. You can deploy an application by simply running the following:
Upload zipped application
This uploads your application to MMC.
curl --basic -u admin:admin -F file=@my-zipped-app.zip -F name=test-app -F version=2.0 --header 'Content-Type: multipart/form-data' http://localhost:8080/mmc/api/repository
List available servers
curl --basic -u admin:admin http://localhost:8080/mmc-console-3.4.0/api/servers
There you should get the server Id (let's suppose it is local$26f2fea8-3b7c-45a7-84a8-d1509e73fca4), then use it in this command:
Create deployment
Before starting your application you need to create a deployment, telling MMC which server to deploy to and the ID of the uploaded application.
curl --basic -u admin:admin -d '{"name" : "ExampleDeployment" , "servers": [ "local$26f2fea8-3b7c-45a7-84a8-d1509e73fca4" ], "applications": [ "local$32bb47d3-d180-4bb9-8906-2378dad9ae21" ]}' --header 'Content-Type: application/json' http://localhost:8080/mmc/api/deployments
Perform deploy
Once you have a server and a deployment you can finally start the application.
curl --basic -u admin:admin -X POST 'http://localhost:8080/mmc/api/deployments/local$97e3c184-09ed-423e-a5a5-9b94713a9e36/deploy'
Here is the auto deployment in a Windows environment which deploys to a Unix server.
Application Name: testserver-1.0.zip
step1: Upload
curl --basic -u admin:admin -F file=@C:/apps/testserver-1.0.zip -F name=auto-deploy-server -F version=1.0 --header "Content-Type: multipart/form-data" http://allmule1.ux.corp.local:8585/mmc/api/repository
Response:
{"versionId":"local$fd507b45-25c2-4cc9-afe9-9f020f685867","applicationId":"local$47bcf1f3-72bc-4c08-ba50-4fe33422199c"}
step2: Get server details:
curl --basic -u admin:admin http://allmule1.ux.corp.local:8585/mmc/api/servers
Response:
{"total":1,"data":[{"agents":[{......,"agentUrl":"https://localhost:7777/mmc-support","version":"3.4.2","name":"Mule-3.4.2","id":"local$5a6c4f81-7b35-425d-95bd-200224f60a2b"}]}
Note: Here server id is: local$5a6c4f81-7b35-425d-95bd-200224f60a2b
Get the VERSION ID (not the application ID) from step 1.
step3: deployments
curl --basic -u admin:admin -d "{\"name\" : \"Auto-Deployment\" , \"servers\": [ \"local$5a6c4f81-7b35-425d-95bd-200224f60a2b\" ], \"applications\": [ \"local$fd507b45-25c2-4cc9-afe9-9f020f685867\" ]}" --header "Content-Type: application/json" http://allmule1.ux.corp.local:8585/mmc/api/deployments
Response:
{"applications":["local$fd...,"name":"Auto-Deployment","id":"local$9062bbe7-75ab-4658-b021-8314b1681511","lastModified":"Wed, 18 Jun 2014 12:27:30.610 PDT"}
Note here Deployment Id: local$9062bbe7-75ab-4658-b021-8314b1681511
Step4: Deploy
curl --basic -u admin:admin -X POST http://allmule1.ux.corp.local:8585/mmc/api/deployments/local$9062bbe7-75ab-4658-b021-8314b1681511/deploy
Response: The deployments were deployed
Verify your server console; the application should have been deployed.
Redeploy
curl --basic -u admin:admin -X POST http://allmule1.ux.corp.local:8585/mmc/api/deployments/local$9062bbe7-75ab-4658-b021-8314b1681511/redeploy
Undeploy:
curl --basic -u admin:admin -X POST http://allmule1.ux.corp.local:8585/mmc/api/deployments/local$9062bbe7-75ab-4658-b021-8314b1681511/undeploy
Automated Deployment with Mule Management Console and Maven
https://dzone.com/articles/automated-deployment-mule?mz=38541-devops