How to overwrite the API proxy deployment using apigeetool

I am using the below command in Jenkins to deploy API proxies to Apigee Edge:
apigeetool deployproxy -u abc -o nonprod -e dev -n poc-jenkins1 -p xyz
But I am getting the below error:
Error: Path /poc-deployment-automation conflicts with existing deployment path for revision 1 of the APIProxy poc-deploy-automation in organization nonprod, environment dev
Here is my requirement; please help me with what command to use.
If the API doesn't exist in the target environment, create the API in the new environment with version 1.
If the API already exists in the target environment, create the API in the new environment with a new version (previous version + 1).
So what command should we use to fix the above error, and what should we use to do the above two tasks?
Help appreciated.

The apigeetool deployproxy command supports your requirements by default: it deploys revision 1 if no proxy with that name exists, and bumps the revision if the proxy already exists.
However, based on the error you mentioned, it seems that you have a path conflict between two proxies. You are trying to deploy a proxy on the /poc-deployment-automation basepath, but another proxy called poc-deploy-automation is already listening on that same basepath. That is not possible, even if the proxy names are different, because the basepath is what Apigee uses to route traffic to your proxy.
Check the XML file at the root of your proxy bundle and change the basepath.
Also, the basepath of an API proxy can be anything, but it cannot be used by two proxies deployed at the same time; only one can be deployed at a time on a given basepath. The revision numbers are irrelevant in this situation.
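As a rough sketch of resolving the conflict from Jenkins, assuming your apigeetool version includes the undeploy command (check apigeetool --help; the -d flag below points at the directory containing your apiproxy folder):
# free the basepath by undeploying the proxy that currently owns /poc-deployment-automation
apigeetool undeploy -u abc -o nonprod -e dev -n poc-deploy-automation -p xyz
# then rerun your deployment; apigeetool creates revision 1 for a new proxy, or bumps the revision if it already exists
apigeetool deployproxy -u abc -o nonprod -e dev -n poc-jenkins1 -p xyz -d .
Alternatively, keep both proxies deployed and change the basepath in poc-jenkins1's bundle so the two no longer overlap.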

Related

Undeploy API from Apigee X Environment of type "archive"

Does anyone have an idea how to "undeploy" an API proxy from an "archive" type Apigee X environment? It seems like it can't be done from the Apigee UI; it throws an error:
"This operation is not supported. The Environment DeploymentType is ARCHIVE. The required Environment DeploymentType is PROXY".
The environment type can't be changed. The available CLI commands are "delete", "deploy", "describe", "list", and "update" (there is no "undeploy" command), and "delete" doesn't work because it can't delete an active deployment. The final goal is to be able to delete the environment, which requires removing/undeploying all API proxies from it first.
I found a solution. The "undeploy" feature I was looking for is not included in the current Apigee X release. On the Apigee community, Google staff stated that they are looking into implementing it at some point. Until then there is a workaround: deploy an archive with no deployments defined to the environment. Once this is done, the proxy is "undeployed" and the environment can be deleted. Here is the step-by-step process of doing it.
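As a hedged sketch of that workaround, using the gcloud alpha apigee archives command group (flag names as I recall them; verify with gcloud alpha apigee archives deploy --help; my-org and my-env are placeholders):
# prepare an archive source that defines no proxy deployments
mkdir empty-archive && cd empty-archive
# deploying it replaces the environment's current archive, leaving no proxies deployed
gcloud alpha apigee archives deploy --organization=my-org --environment=my-env --source=.
Once the environment reports no remaining deployments, deleting it should go through.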

Apache Custom Module permission issue with calling Libipset

I'm working on an Apache module that can call the libipset API to test whether an IP is in a list. This is being used as a backup firewall for proxied connections.
I've managed to get everything working up until the C code calls type = ipset_type_get(session, cmd);. After testing, I believe the main problem is that libipset requires higher permissions. I'm not getting a permission error, just a null value. However, when I run the C program directly using apache as the user, I can get it to work if I grant sudo privileges to apache for that program.
I've tried options 1 and 2 in the answers here and they've both failed. Is there any other way to force root for the ipset API call?
This action might need cap_net_admin.
If using systemd to control the process, you can add it like this:
[Service]
...
CapabilityBoundingSet=CAP_NET_ADMIN
Another approach would be to set the binary executable's capabilities.
setcap cap_net_admin=ep /usr/sbin/apache2
If using AppArmor, you could instead set up a profile for Apache and include the line
capability net_admin,
in the file (/etc/apparmor.d/usr.sbin.apache2)
(see here: https://serverfault.com/questions/932410/enabling-apparmor-for-apache2-in-ubuntu-18-04)
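As a quick check of the file-capability route, here is a minimal shell sketch assuming a Debian/Ubuntu layout (the /usr/sbin/apache2 path and the apache2 service name are assumptions; adjust for your distribution):
# grant CAP_NET_ADMIN on the Apache binary
sudo setcap cap_net_admin=ep /usr/sbin/apache2
# verify the capability is now set on the file
getcap /usr/sbin/apache2
# restart so the running workers pick it up
sudo systemctl restart apache2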

Kubernetes - env variables as API url

So I have an API that's the gateway for two other APIs.
Using Docker in WSL 2 (Ubuntu), when I build my gateway API I run it like this:
docker run -d -p 8080:8080 -e A_API_URL=$A_API_URL -e B_API_URL=$B_API_URL registry:$(somePort)//gateway
I have two environment variables that are the API URLs of the two APIs. I just don't know how to make this work in the Kubernetes config:
env:
- name: A_API_URL
value: <need help>
- name: B_API_URL
value: <need help>
I get 500 or 502 errors when accessing them over the network.
I tried specifying the value of the env var as:
their respective service's name
the complete URI (http://$(addr):$(port))
the relative path: /something/anotherSomething
Each API is deployed with a Deployment controller and a Service.
I'm at a loss; any help is appreciated.
You just have to hardwire them. Kubernetes doesn't know anything about your local machine. There are templating tools like Helm that could inject values the way Bash does in your docker run example, but that is generally not a good idea, since anyone other than you running the same command could get different results. The values should look like http://servicename.namespacename.svc.cluster.local:port/whatever. So if the service is named foo in namespace default with port 8000 and path /api, use http://foo.default.svc.cluster.local:8000/api.
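A hedged sketch of wiring that up, assuming the two backing Services are named a-api and b-api in the default namespace, both on port 8000, and the gateway Deployment is named gateway (swap in your real names and ports):
# point the gateway's env vars at the in-cluster DNS names of the Services
kubectl set env deployment/gateway \
  A_API_URL=http://a-api.default.svc.cluster.local:8000 \
  B_API_URL=http://b-api.default.svc.cluster.local:8000
# confirm what the pods actually see
kubectl exec deploy/gateway -- printenv A_API_URL B_API_URL
The same URLs can of course be hard-coded as the value fields in the env: block of your Deployment manifest.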

I am trying OpenShift Origin and I cannot create an application

I am trying OO on a RHEL Atomic Host. I spun up the OO master as a container following this guide: https://docs.openshift.org/latest/getting_started/administrators.html
After attaching a shell to the master container, I cannot deploy an app.
# oc new-app openshift/deployment-example
error: can't look up Docker image "openshift/deployment-example": Internal error occurred: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection error: no match for "openshift/deployment-example"
The 'oc new-app' command will match arguments to the following types:
1. Images tagged into image streams in the current project or the 'openshift' project
- if you don't specify a tag, we'll add ':latest'
2. Images in the Docker Hub, on remote registries, or on the local Docker engine
3. Templates in the current project or the 'openshift' project
4. Git repository URLs or local paths that point to Git repositories
--allow-missing-images can be used to point to an image that does not exist yet.
See 'oc new-app -h' for examples.
The host needs a proxy to access the Internet. I have configured the proxy in /etc/sysconfig/docker, and that is how I could pull the origin image on the same host.
I have tried setting the proxy for the master and node with no luck:
https://docs.openshift.org/latest/install_config/http_proxies.html
It is possible that your proxy is terminating the connection. You can test this by creating an internal registry, pushing the image to it, and then using
"oc new-app your.internal.registry/openshift/deployment-example"

cloudfoundry - vmc register error with external uri

I installed Cloud Foundry with the -D option to change the default domain. Cloud Foundry installs fine and starts, but when I try to vmc in I get an error:
swampfox@swampcf:~$ vmc target api.mydomain.com
Successfully targeted to [http://api.mydomain.com]
swampfox@swampcf:~$ vmc register --email emailid@gmail.com --passwd mypass
Creating New User: OK
Attempting login to [http://api.mydomain.com]
Problem with login to 'http://api.mydomain.com', target refused connection (getaddrinfo: Name or service not known), try again or register for an account.
swampfox@swampcf:~$ vmc register --email emailid@gmail.com --passwd mypass
Creating New User: Error 100: Bad request
Can someone help? I need to have the external URI or this is useless for me.
This works fine if I take the default api.vcap.me, but it only works on that VM and is not accessible from other infrastructure, which is pretty useless.
I have found the issue. There is a bug in vmc-0.3.21. I backed it down to vmc-0.3.18 and everything works now.
Whoof! How do I open a bug against vmc?
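For reference, a minimal sketch of pinning the older release, assuming vmc was installed as a Ruby gem:
gem uninstall vmc
gem install vmc --version 0.3.18
# confirm only 0.3.18 is installed
gem list vmc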
When you tried api.vcap.me, did you do this by just changing the endpoint address in config/cloud_controller.yml? If so, it may be worth checking whether the setup set the endpoint correctly in all the other configuration files, uaa.yml especially in this case, as you are having issues with login.
I have always used the standard configuration (api.vcap.me) and then manually changed the endpoint in all the configuration files using sed, for example, from the config directory:
sed -i 's/\.vcap\.me/.newdomain.com/g' *.yml
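As a hedged follow-up check, run from the same config directory, to confirm no file still points at the old domain:
# should print nothing once every endpoint has been rewritten
grep -n 'vcap\.me' *.yml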
Actually, I initially installed with the default api.vcap.me. Then I wiped out the guest and completely reinstalled with -D mydomain.com. I have subsequently installed another CF on a different guest with api.vcap.me for comparison.
I checked the config /home/cfadmin/cloudfoundry/.deployments/devbox/config/uaa.yml and there is no reference to api.swampnet.com or api.vcap.me.
Just a quick note: I can successfully log in from an external domain like emc.com, but I cannot log in on the local machine or a machine in the same subnet. Whoof!
I noticed that the controller had external uris set to false, so I set them to true, but that made no difference. If I set them to true on the api.vcap.me instance, will that allow me to push an app with an external URI?