How do I access my API that is deployed on a server?

app.use((req, res, next) => {
  res.setHeader('Access-Control-Allow-Origin', '*');
  next(); // without this, requests never reach the route handlers
});
I used this code in my Express.js backend.
I then deployed it to the server and started the app with node index.
Then I tried to access the API from my local machine using the EC2 instance's public IPv4 address (AWS): <public-IPv4>:3000/getData
But it didn't work. Error: {"message":"connect ETIMEDOUT 172.31.2.86:5432"}.
What did I miss?

Check your AWS security group config: inbound traffic on port 3000 is blocked by default, so add an inbound rule for it. Note also that the ETIMEDOUT in your error points at port 5432 (PostgreSQL's default) on a private VPC address, so the instance's connection to its database is worth checking in the same place.
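If the missing inbound rule is the problem, a one-liner like the following opens the port from anywhere (the security-group ID here is a placeholder for your own):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3000 --cidr 0.0.0.0/0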

Related

Node App is not working correctly on Google App Engine

I have connected my Node app to my Cloud SQL database and it works perfectly locally, but when I deploy the app to App Engine, the API no longer works. In the logs I can see the following:
(node:11) UnhandledPromiseRejectionWarning: SequelizeConnectionError: connect ETIMEDOUT
This does not happen when I run the app locally, despite being connected to the same Google Cloud SQL DB.
I thought maybe it would not connect because of security restrictions, but the SQL connections page says the following:
Apps in this project: All authorized.
My Node app is deployed within the same project, so that shouldn't be the problem. I have also whitelisted my home IP so that I could connect locally, and that was a success as mentioned before. Any help would be appreciated.
EDIT:
Here is my attempt to connect to the DB. It now fails both locally and when I deploy. It works locally if I pass the public IP to "host".
const sequelize = new Sequelize('linkspot', 'kyle', 'password', {
  host: '/cloudsql/linkspot:us-central1:linkspot-mysql',
  dialect: 'mysql',
  port: 3306,
  pool: {
    max: 5,
    min: 1,
    acquire: 30000,
    idle: 10000
  }
});
When you connect from your local computer the connection is made via the SQL Proxy, but when the app is deployed on App Engine several additional configuration steps are needed, depending on whether it uses a private or a public IP.
For instance, private IP connections require you to create a Serverless VPC Access connector in the same VPC network as your Cloud SQL instance, while public IP connections require you to use a Unix domain socket in the format /cloudsql/INSTANCE_CONNECTION_NAME.
For the instructions in both cases, check this doc.
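For Sequelize with the mysql dialect, the socket path goes in dialectOptions rather than host. A minimal sketch of the public IP (Unix domain socket) variant, reusing the credentials and instance connection name from the question:

const Sequelize = require('sequelize');

const sequelize = new Sequelize('linkspot', 'kyle', 'password', {
  dialect: 'mysql',
  dialectOptions: {
    // Unix domain socket that App Engine exposes for this Cloud SQL instance
    socketPath: '/cloudsql/linkspot:us-central1:linkspot-mysql'
  },
  pool: {
    max: 5,
    min: 1,
    acquire: 30000,
    idle: 10000
  }
});

Note that host and port are omitted: with a socket path set, the MySQL driver connects through the socket instead of TCP.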

Cannot access ASP.NET Core Web API from Windows-based Amazon virtual machine

I've been searching Google and StackOverflow about this for several hours now, and nothing seems to work.
I created a Web API server in ASP.NET Core and verified it works locally.
I created a new EC2 VM to host the Web API. I copied all the binaries for the API up to EC2, and started the server from the command line.
I have made sure that the required EC2 security group exists and that the correct TCP port is open for inbound traffic in the security group.
I have added an appropriate firewall rule to Windows firewall.
netstat -q shows that the server is running and that the port is in LISTENING mode.
I get the public IP address for my EC2 VM and go back to my local machine.
I use Postman to submit POST requests to the EC2-based server.
Postman returns an error: "There was an error connecting to ...."
So I'm at a loss as to what to do next. Have I missed some EC2 configuration? Is there something else I have to do to the ASP.NET Core WebAPI code?

OpenShift v3 connect app with redis. Connection Refused

I have created a Redis 3.2 application from the default image catalog.
I'm trying to connect a Python app that runs inside the same project to the Redis DB.
This is what the Python application uses to connect to Redis:
REDIS_HOST = 'localhost'
REDIS_PORT = 6379
REDIS_PASSWORD = os.environ.get('REDIS_PASSWORD') or 'test'
redis = aioredis.create_redis_pool(
    (REDIS_HOST, int(REDIS_PORT)),
    password=REDIS_PASSWORD,
    minsize=5,
    maxsize=10,
    loop=loop,
)
The deployment fails with a ConnectionRefusedError: [Errno 111] Connection refused.
My guess is that I need to use another value for REDIS_HOST, but I couldn't figure out what to use.
Does anyone know how to fix this?
After you deploy from the image catalog, a number of objects are created for you. One of those objects is a service, which is used to load-balance requests to the pods it fronts. Service names for a project can be retrieved using the client tools via oc get svc.
This service name should be used to connect to your Redis instance. If you deploy Redis before your Python application, some environment variables will already be populated that you can use, for example REDIS_SERVICE_HOST and REDIS_SERVICE_PORT.
So from your application you can connect via the service IP or the service name; with a service name of redis that would be redis.StrictRedis(host='redis', port=6379, password='secret').
The Redis password may have been generated for you. In that case it is retrievable from the redis secret, which could also be mounted into your Python app.
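Assuming the default template's layout, where the secret is named redis and the key is database-password (both names are assumptions, check with oc get secrets), the generated password can be read back with:

oc get secret redis -o jsonpath='{.data.database-password}' | base64 -d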
Databases in general do not use plain HTTP, but custom TCP protocols. This is why in OpenShift we need to connect directly to the service using OpenShift's service hostname or IP address (caution: only the service hostname is predictable), instead of the usual route, and this applies to Redis as well. Bypassing the routes in OpenShift is like bypassing a reverse proxy such as nginx and connecting directly to the DB backend.
There is no need to use env variables, because service hostnames are auto-generated by OpenShift using this predictable pattern:
service_name.project_name.svc, e.g.:
redis.db.svc
More info:
"When a web application is made visible outside of the OpenShift cluster a route is created. This enables a user to use a URL to access the web application from a web browser. A route is usually used for web applications which use the HTTP protocol. A route cannot be used to expose a database, as they would typically use their own distinct protocol, and routes would not be able to work with the database protocol."
(https://blog.openshift.com/openshift-connecting-database-using-port-forwarding/)
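The post cited above also covers an option for ad hoc access from outside the cluster: forwarding a local port to the Redis pod. A sketch, with a placeholder pod name (find the real one via oc get pods):

oc port-forward redis-1-abcde 6379:6379

A local client can then connect to localhost:6379 as if Redis were running on your own machine.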

Setting up redis cache for remote connections

I have just installed a Redis cache on a Windows server using the Redis package (https://github.com/MicrosoftArchive/redis).
It runs fine under localhost.
My problem is: what do I need to do to connect to this cache instance from a remote connection?
When I try to connect using the server's IP address I get an error, and I get the same error using the server name.
What needs to be done to expose the cache?
Modify the Redis conf: set protected-mode to no and bind the public IP of your machine.
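In the MicrosoftArchive build that usually means editing redis.windows-service.conf (the filename is an assumption; use whichever conf file the service was installed with), and the IP below is a placeholder for your server's own address:

bind 192.168.1.100
protected-mode no

Restart the Redis service afterwards so the settings take effect.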

How to access my web app on Apache/Ubuntu with a custom domain name in a LAN?

I have developed a web app using Laravel and Apache 2.4 on Ubuntu 15.04 inside VMware. I have configured the Ubuntu machine's IP address as static: 192.168.1.250.
Within Ubuntu I can access the web app from 127.0.0.1 or localhost, and from networked devices I can access it using the Ubuntu machine's IP address, 192.168.1.250.
Now I want to access the web app from the networked devices using a domain name instead of the IP address. I think I need to install and configure a DNS server in Ubuntu alongside Apache, so I installed BIND and tried to configure it, but failed. If it can be done with BIND, I was wondering how? If not, what might be another way? Thank you!
You can create a tunnel to your local environment using ngrok, which will give you a temporary address (to keep the address static you have to use the pro, a.k.a. paid, features).
Follow these steps:
Download and unzip ngrok
Open a cmd / terminal and navigate to the ngrok location
Type the following command:
ngrok http {your_localhost_server_port_number}
It will create the tunnel, but we need to point a virtual host at it, so edit your local server's virtual host config and add an alias / server name like the following:
NOTE: if you only have one app running on your local server this step is optional
*.ngrok.io
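In an Apache virtual host that could look like the sketch below; ServerName and DocumentRoot are placeholders for your own app:

<VirtualHost *:80>
    ServerName myapp.local
    ServerAlias *.ngrok.io
    DocumentRoot /var/www/myapp/public
</VirtualHost>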
Now restart your local server to load the new configuration.
You can now see your localhost site online using the ngrok-provided URL.
Enjoy!