Has anyone succeeded in configuring Collider for AppRTC (WebRTC)?

My question on GitHub:
https://github.com/webrtc/apprtc/issues/615
I can't configure AppRTC's signaling server: a video call works over Wi-Fi, but over the mobile network I have no luck.
Please review my config; I can't find any example of constants.py anywhere.
Here is my config:
ICE_SERVER_OVERRIDE = [
    {
        "urls": [
            "stun:stun.l.google.com:19302"
        ]
    },
    {
        "urls": [
            "turn:my_ip_address:3478?transport=udp"
        ],
        "username": "my_account",
        "credential": "password"
    },
    {
        "urls": [
            "turn:my_ip_address:3479?transport=udp"
        ],
        "username": "my_account",
        "credential": "password"
    }
]
TURN_SERVER_OVERRIDE = [
    {
        "urls": "turn:my_ip_address:3478",
        "username": "my_account",
        "credential": "password"
    },
    {
        "urls": "stun:stun.l.google.com:19302"
    }
]
TURN_BASE_URL = 'http://my_url.com'
TURN_URL_TEMPLATE = '%s/turn?username=%s&key=%s'
CEOD_KEY = ''
ICE_SERVER_BASE_URL = 'http://my_url.com'
ICE_SERVER_URL_TEMPLATE = '%s/v1alpha/iceconfig?key=%s'
ICE_SERVER_API_KEY = os.environ.get('ICE_SERVER_API_KEY')
# Dictionary keys in the collider instance info constant.
WSS_INSTANCE_HOST_KEY = 'my_ip_address:8443'
WSS_INSTANCE_NAME_KEY = 'wsserver-std'
WSS_INSTANCE_ZONE_KEY = 'us-central1-a'
WSS_INSTANCES = [{
    WSS_INSTANCE_HOST_KEY: 'my_ip_address:8443',
    WSS_INSTANCE_NAME_KEY: 'wsserver-std',
    WSS_INSTANCE_ZONE_KEY: 'us-central1-a'
}, {
    WSS_INSTANCE_HOST_KEY: 'apprtc-ws-2.webrtc.org:443',
    WSS_INSTANCE_NAME_KEY: 'wsserver-std-2',
    WSS_INSTANCE_ZONE_KEY: 'us-central1-f'
}]
WSS_HOST_PORT_PAIRS = [ins[WSS_INSTANCE_HOST_KEY] for ins in WSS_INSTANCES]
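For reference, AppRTC appears to build the ICE request URL by filling ICE_SERVER_URL_TEMPLATE with ICE_SERVER_BASE_URL and ICE_SERVER_API_KEY. A rough way to sanity-check both endpoints from a shell, using the placeholder host names from this config, might be:

    # Hypothetical sanity checks; replace my_url.com / my_ip_address with your real values.
    # 1. Fetch the ICE server list the same way the URL template above expands.
    curl "http://my_url.com/v1alpha/iceconfig?key=$ICE_SERVER_API_KEY"
    # 2. Confirm the Collider TLS endpoint on 8443 answers at all (certificate details are printed).
    openssl s_client -connect my_ip_address:8443 </dev/null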
When I run it, my AppRTC returns this error:
WebSocket open error: WebSocket error.
So I don't understand what these keys mean:
WSS_INSTANCE_HOST_KEY: 'my_ip_address:8443',
WSS_INSTANCE_NAME_KEY: 'wsserver-std',
WSS_INSTANCE_ZONE_KEY: 'us-central1-a'
When I change them back to the defaults in the original code, it works, but ONLY over Wi-Fi; the mobile network does not work. I also run the TURN server on port 3478 and Collider on 8443 with .pem files.
So, can anyone tell me how to test a Collider and TURN server config successfully for mobile connections?

Two years later, I found the error in my AppRTC config:
Just configure the ICE servers like this:
ICE_SERVER_OVERRIDE = [
    {
        "urls": [
            "stun:stun.l.google.com:19302"
        ]
    },
    {
        "urls": [
            "turn:my_ip_address:3478?transport=udp"
        ],
        "username": "my_account",
        "credential": "password"
    },
    {
        "urls": [
            "turn:my_ip_address:3479?transport=udp"
        ],
        "username": "my_account",
        "credential": "password"
    }
]
ICE_SERVER_BASE_URL = 'http://my_url.com'
ICE_SERVER_URL_TEMPLATE = '%s/v1alpha/iceconfig?key=%s'
ICE_SERVER_API_KEY = os.environ.get('ICE_SERVER_API_KEY')
and in /etc/turnserver.conf
cert=/root/cert.pem
pkey=/root/key.pem
listening-port=3478
tls-listening-port=5349
listening-ip=my_ip_address
relay-ip=my_ip_address
external-ip=my_ip_address
realm=my_web_address
server-name=my_web_address
#lt-cred-mech
userdb=/etc/turnuserdb.conf
oauth
user=my_account:my_password
no-stdout-log
The reason for the error: when I configured "lt-cred-mech" authentication, it failed.
So I changed it to "oauth", and it worked.
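After editing /etc/turnserver.conf it helps to make sure coturn was actually restarted and is listening on the configured ports. A minimal sketch, assuming a systemd-based install where the service is named coturn (adjust to your distro):

    sudo systemctl restart coturn    # pick up the edited /etc/turnserver.conf
    sudo ss -lnu | grep 3478         # the UDP TURN listener should appear here
    sudo ss -lnt | grep 5349         # and the TLS listener, if tls-listening-port is enabled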
Test the TURN server on this website:
https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/
The result is very fast (like Google's STUN URL):
0.005 1 host 3868393361 udp 192.168.1.157 35353 126 | 30 | 255
0.006 1 host 891932622 udp xxxx:xxxx:12c7:xxxx:247e:xxxx:3c18:xxxx 51606 126 | 40 | 255
0.009 1 srflx 842163049 udp aa.bb.cc.dd 3341 100 | 30 | 255
0.062 1 relay 3031532034 udp my_turn_ip_address 62030 2 | 30 | 255
0.105 Done
0.109
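If coturn was installed from the standard packages, its bundled test client can also exercise the relay directly from a shell, without a browser. A sketch using the placeholder credentials from the config above (flags may differ slightly between coturn versions):

    # Allocate a relay on the TURN server and bounce test packets through it.
    turnutils_uclient -v -y -u my_account -w my_password -p 3478 my_ip_address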

Related

Error with IPFS CORS

When trying to use IPFS from my localhost, I am having trouble accessing the IPFS service. I tried setting my config to accept localhost and all the server settings, but nothing seems to work.
The error:
Failed to load http://127.0.0.1:5001/api/v0/files/stat?arg=0x6db883c6f3b2824d26f3b2e9c30256b490d125b10a3942f49a1ac715dd2def89&stream-channels=true: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:63342' is therefore not allowed access. The response had HTTP status code 403. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
IPFS Config:
{
"API": {
"HTTPHeaders": {
"Access-Control-Allow-Origin": [
"*"
]
}
},
"Addresses": {
"API": "/ip4/127.0.0.1/tcp/5001",
"Announce": [],
"Gateway": "/ip4/127.0.0.1/tcp/8080",
"NoAnnounce": [],
"Swarm": [
"/ip4/0.0.0.0/tcp/4001",
"/ip6/::/tcp/4001"
]
},
"Bootstrap": [
"/dnsaddr/bootstrap.libp2p.io/ipfs/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
"/dnsaddr/bootstrap.libp2p.io/ipfs/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
"/dnsaddr/bootstrap.libp2p.io/ipfs/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
"/dnsaddr/bootstrap.libp2p.io/ipfs/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
"/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
"/ip4/104.236.179.241/tcp/4001/ipfs/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM",
"/ip4/128.199.219.111/tcp/4001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu",
"/ip4/104.236.76.40/tcp/4001/ipfs/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64",
"/ip4/178.62.158.247/tcp/4001/ipfs/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd",
"/ip6/2604:a880:1:20::203:d001/tcp/4001/ipfs/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM",
"/ip6/2400:6180:0:d0::151:6001/tcp/4001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu",
"/ip6/2604:a880:800:10::4a:5001/tcp/4001/ipfs/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64",
"/ip6/2a03:b0c0:0:1010::23:1001/tcp/4001/ipfs/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd"
],
"Datastore": {
"BloomFilterSize": 0,
"GCPeriod": "1h",
"HashOnRead": false,
"Spec": {
"mounts": [
{
"child": {
"path": "blocks",
"shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
"sync": true,
"type": "flatfs"
},
"mountpoint": "/blocks",
"prefix": "flatfs.datastore",
"type": "measure"
},
{
"child": {
"compression": "none",
"path": "datastore",
"type": "levelds"
},
"mountpoint": "/",
"prefix": "leveldb.datastore",
"type": "measure"
}
],
"type": "mount"
},
"StorageGCWatermark": 90,
"StorageMax": "10GB"
},
"Discovery": {
"MDNS": {
"Enabled": true,
"Interval": 10
}
},
"Experimental": {
"FilestoreEnabled": false,
"Libp2pStreamMounting": false,
"ShardingEnabled": false
},
"Gateway": {
"HTTPHeaders": {
"Access-Control-Allow-Headers": [
"X-Requested-With",
"Range"
],
"Access-Control-Allow-Methods": [
"GET"
],
"Access-Control-Allow-Origin": [
"localhost:63342"
]
},
"PathPrefixes": [],
"RootRedirect": "",
"Writable": false
},
"Identity": {
"PeerID": "QmRgQdig4Z4QNEqs5kp45bmq6gTtWi2qpN2WFBX7hFsenm"
},
"Ipns": {
"RecordLifetime": "",
"RepublishPeriod": "",
"ResolveCacheSize": 128
},
"Mounts": {
"FuseAllowOther": false,
"IPFS": "/ipfs",
"IPNS": "/ipns"
},
"Reprovider": {
"Interval": "12h",
"Strategy": "all"
},
"Swarm": {
"AddrFilters": null,
"ConnMgr": {
"GracePeriod": "20s",
"HighWater": 900,
"LowWater": 600,
"Type": "basic"
},
"DisableBandwidthMetrics": false,
"DisableNatPortMap": false,
"DisableRelay": false,
"EnableRelayHop": false
}
}
Ben, try replacing 127.0.0.1 with localhost. go-ipfs whitelists localhost only. Also check https://github.com/ipfs/js-ipfs-api/#cors
My answer might come very late; however, I have been solving some CORS issues with IPFS on my end, so I might have a solution for you.
By running:
# please update origin according to your setup...
origin=http://localhost:63342
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["'"$origin"'", "http://127.0.0.1:8080","http://localhost:3000", "http://127.0.0.1:48084", "https://gateway.ipfs.io", "https://webui.ipfs.io"]'
ipfs config API.HTTPHeaders.Access-Control-Allow-Origin
and restarting your ipfs daemon, it might be fixed.
If the "fetch" button on the following page works, you are all set! https://gateway.ipfs.io/ipfs/QmXkhGQNruk3XcGsidCzQbcNQ5a8oHWneHZXkPvWB26RbP/
This command works for me:
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["'"$origin"'", "http://127.0.0.1:8080","http://localhost:3000"]'
You can allow requests from multiple origins.
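To confirm the header is actually being served after restarting the daemon, a quick curl check may help. This is a sketch: the origin mirrors the one in the error message, and /api/v0/version is just a cheap API call whose response headers can be inspected.

    curl -i -X POST -H "Origin: http://localhost:63342" "http://127.0.0.1:5001/api/v0/version"
    # The response headers should now include:
    # Access-Control-Allow-Origin: http://localhost:63342   (or *)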

Swagger UI and Docker Container Communication

I have a Docker container running Swagger UI on port 80, and another API running in a second container on port 32788.
http://127.0.0.1:80/ >>> returns Swagger UI
http://127.0.0.1:32788/swagger.json >>> returns the Swagger API definition
But when I put the JSON file's URL into the Swagger UI field and hit Explore, it says:
NetworkError when attempting to fetch resource. http://127.0.0.1:32788/swagger.json
Any ideas on how to solve this? The docs say the containers should automatically be connected to the bridge network.
Below is the result of the network inspection
docker network inspect bridge
[
{
"Name": "bridge",
"Id": "4b5cc1526055297df70dc9adc4959fcee93384c412fbf90500c041b5b83ed43a",
"Created": "2018-01-17T03:48:39.2325461Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"257a15af9ab9b25c6c5622fb0ebe599e5703b2ca5f2e4eaa97a8745a21e7f9a9": {
"Name": "pensive_neumann",
"EndpointID": "22be4b781f75e071bcb0098b917b81b16ca493e9080848188dd7a811c27070ec",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
},
"30de904a599a19075d5e20ef5d974a11be9d7e58a68d984a24f4af9e22c4d92b": {
"Name": "naughty_mirzakhani",
"EndpointID": "f704b3e103a82ca5c56d5955ac27845d8951cfe13f0bc3e1ccc8717ea9c28d39",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
Edit, to explain how each was started:
The API is part of Azure Machine Learning, so it's hard to say exactly how it gets started (unless there is some command I can run in Docker to find out):
az ml service create realtime
Swagger UI was started as follows:
docker run -p 80:8080 swaggerapi/swagger-ui
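As a side note on the bridge network mentioned above: on Docker's default bridge, containers can reach each other only by IP address, while a user-defined bridge also gives them DNS resolution by container name. A rough sketch of that setup; the container names, the API image name, and the API's internal port are all placeholders:

    docker network create swagger-net
    docker run -d --name api --network swagger-net -p 32788:80 my-api-image          # hypothetical API image/port
    docker run -d --name swagger-ui --network swagger-net -p 80:8080 swaggerapi/swagger-ui
    # Inside the swagger-ui container the API would then resolve as http://api/ .
    # Note, however, that Swagger UI fetches swagger.json from the browser, so the URL
    # typed into the UI must also be reachable from the host, not just between containers.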

AWS EC2: cannot SSH from one subnet/VPC to another subnet/VPC

I have an AWS EC2 machine (172.18.18.133) on a subnet with CidrBlock 172.18.18.0/23.
SSH ingress is open for 10.0.0.0/8 and 172.23.0.0/18 (ignore the "0.0.0.0/0" entry in the security group; I added it while experimenting because the specific source CidrBlock did not work):
aws ec2 describe-security-groups --group-ids sg-659fd31p --profile aws-federated --region us-west-2
{
"SecurityGroups": [
{
"IpPermissionsEgress": [
{
"IpProtocol": "-1",
"PrefixListIds": [],
"IpRanges": [
{
"CidrIp": "0.0.0.0/0"
}
],
"UserIdGroupPairs": [],
"Ipv6Ranges": []
}
],
"Description": "VPC Security Group",
"Tags": [
{
"Value": "restapi-dev",
"Key": "elasticbeanstalk:environment-name"
},
{
"Value": "awseb-e-8gx8kmq9dj-stack",
"Key": "aws:cloudformation:stack-name"
},
{
"Value": "AWSEBSecurityGroup",
"Key": "aws:cloudformation:logical-id"
},
{
"Value": "restapi-dev",
"Key": "Name"
},
{
"Value": "arn:aws:cloudformation:us-west-2:033814027302:stack/awseb-e-8gx8kmq9dj-stack/605642e0-3eb8-11e7-a388-503ac9ec2499",
"Key": "aws:cloudformation:stack-id"
},
{
"Value": "e-8gx8kmq9dj",
"Key": "elasticbeanstalk:environment-id"
}
],
"IpPermissions": [
{
"PrefixListIds": [],
"FromPort": 80,
"IpRanges": [],
"ToPort": 80,
"IpProtocol": "tcp",
"UserIdGroupPairs": [
{
"UserId": "033814027302",
"GroupId": "sg-ee81cd95"
}
],
"Ipv6Ranges": []
},
{
"PrefixListIds": [],
"FromPort": 22,
"IpRanges": [
{
"CidrIp": "10.0.0.0/8"
},
{
"CidrIp": "0.0.0.0/0"
},
{
"CidrIp": "172.23.0.0/18"
}
],
"ToPort": 22,
"IpProtocol": "tcp",
"UserIdGroupPairs": [],
"Ipv6Ranges": []
}
],
"GroupName": "awseb-e-8gx8kmq9dj-stack-AWSEBSecurityGroup-4J0FPNXL840U",
"VpcId": "vpc-5374e434",
"OwnerId": "033814027302",
"GroupId": "sg-659fd31p"
}
]
}
I want to SSH into the above machine from another machine that is on a different VPC with CidrBlock 172.23.0.0/18.
But I cannot connect from the EC2 machine with IP address 172.23.38.167 to the target machine.
[ec2-user@ip-172-23-38-167 ~]$ ssh -v -i /home/ec2-user/.ssh/staging-api.pem ec2-user@172.18.18.133
OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 56: Applying options for *
debug1: Connecting to 172.18.18.133 [172.18.18.133] port 22.
debug1: connect to address 172.18.18.133 port 22: Connection timed out
ssh: connect to host 172.18.18.133 port 22: Connection timed out
I do have the .pem file in ~/.ssh:
[ec2-user@ip-172-23-38-167 ~]$ ll ~/.ssh/
total 20
-rw-------. 1 ec2-user ec2-user 1675 May 24 02:45 staging-api.pem
-rw-------. 1 ec2-user ec2-user 398 Apr 8 21:29 authorized_keys
-rw-------. 1 root root 1766 Apr 23 20:06 gitkey_rsa
-rw-r--r--. 1 root root 386 Apr 23 20:06 gitkey_rsa.pub
-rw-r--r--. 1 ec2-user ec2-user 413 May 20 21:02 known_hosts
Note: I have a few EC2 VMs in the same subnet, and I can SSH between them.
Target/Source VPC config
I'm not sure, but the problem could be the routing table on the VPC.
The routing table config of the VPC containing the target machine I want to SSH into is below. I don't know the purpose of all six or seven routes, but I understand the NAT gateway is there so VMs in a private subnet can connect to the Internet or other AWS services.
$ aws ec2 describe-route-tables --route-table-ids rtb-9e0337f9 --profile aws-federated --region us-west-2
{
"RouteTables": [
{
"Associations": [
{
"SubnetId": "subnet-a1ec23e8",
"RouteTableAssociationId": "rtbassoc-d8ffbbbe",
"Main": false,
"RouteTableId": "rtb-9e0337f9"
}
],
"RouteTableId": "rtb-9e0337f9",
"VpcId": "vpc-5374e434",
"PropagatingVgws": [],
"Tags": [
{
"Value": "fff000",
"Key": "Permissions"
},
{
"Value": "us-west-2b",
"Key": "PhysicalLocation"
},
{
"Value": "InternalSubnet01AZ1RouteTable",
"Key": "aws:cloudformation:logical-id"
},
{
"Value": "fff000-vpc-nonprod-prayagupd-vpc-01-VPCTeamNestedStackTemplate-1EH2K9THBASPW",
"Key": "aws:cloudformation:stack-name"
},
{
"Value": "rtb_nonprod-prayagupd-vpc-01_internal_az1",
"Key": "Name"
},
{
"Value": "arn:aws:cloudformation:us-west-2:033814027302:stack/fff000-vpc-nonprod-prayagupd-vpc-01-VPCTeamNestedStackTemplate-1EH2K9THBASPW/f7e06c10-ee60-11e6-92e6-503a90a9c435",
"Key": "aws:cloudformation:stack-id"
},
{
"Value": "internal",
"Key": "Designation"
}
],
"Routes": [
{
"Origin": "CreateRoute",
"DestinationCidrBlock": "172.16.2.0/23",
"State": "active",
"VpcPeeringConnectionId": "pcx-c67fffaf"
},
{
"Origin": "CreateRoute",
"DestinationCidrBlock": "172.16.4.0/23",
"State": "active",
"VpcPeeringConnectionId": "pcx-c67fffaf"
},
{
"Origin": "CreateRoute",
"DestinationCidrBlock": "172.16.122.0/23",
"State": "active",
"VpcPeeringConnectionId": "pcx-f0f76299"
},
{
"Origin": "CreateRoute",
"DestinationCidrBlock": "172.16.104.0/21",
"State": "active",
"VpcPeeringConnectionId": "pcx-7483081d"
},
{
"GatewayId": "local",
"DestinationCidrBlock": "172.18.16.0/21",
"State": "active",
"Origin": "CreateRouteTable"
},
{
"GatewayId": "vgw-cb23fbd5",
"DestinationCidrBlock": "192.168.0.0/16",
"State": "active",
"Origin": "CreateRoute"
},
{
"GatewayId": "vgw-cb23fbd5",
"DestinationCidrBlock": "10.0.0.0/8",
"State": "active",
"Origin": "CreateRoute"
},
{
"Origin": "CreateRoute",
"DestinationCidrBlock": "0.0.0.0/0",
"NatGatewayId": "nat-0dbd1eca0fe1fcb8e",
"State": "active"
}
]
}
]
}
For the source VPC, the route config is similar to the target VPC:
{
"Origin": "CreateRoute",
"DestinationCidrBlock": "0.0.0.0/0",
"NatGatewayId": "nat-0b6d136887df6f792",
"State": "active"
}
The NAT gateway config for the source VPC is:
$ aws ec2 describe-nat-gateways --nat-gateway-id nat-0b6d136887df6f792 --profile aws-federated --region us-west-2
{
"NatGateways": [
{
"NatGatewayAddresses": [
{
"PublicIp": "34.208.30.85",
"NetworkInterfaceId": "eni-43d8c630",
"AllocationId": "eipalloc-d47488b2",
"PrivateIp": "172.23.248.220"
}
],
"VpcId": "vpc-a77a82c2",
"State": "available",
"NatGatewayId": "nat-0b6d136887df6f792",
"SubnetId": "subnet-b267b2d7",
"CreateTime": "2017-03-30T18:16:05.767Z"
}
]
}
Resource
Possible reasons for timeout when trying to access EC2 instance
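No answer was posted here, but one thing that stands out from the route tables above is that neither table shows a route for the other VPC's CIDR; traffic between 172.23.0.0/18 and 172.18.16.0/21 would fall through to the 0.0.0.0/0 NAT-gateway routes, which send it toward the internet instead of the other VPC. A hedged sketch of wiring the two VPCs together with peering; the two VPC IDs are quoted from the output above, every other ID is a placeholder, and whether this is the actual missing piece depends on details not shown (NACLs, the source VPC's full route table):

    # 1. Peer the two VPCs (source vpc-a77a82c2 -> target vpc-5374e434) and accept the request.
    aws ec2 create-vpc-peering-connection --vpc-id vpc-a77a82c2 --peer-vpc-id vpc-5374e434 --region us-west-2
    aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-PLACEHOLDER --region us-west-2
    # 2. Add a route in each VPC's route table pointing at the peering connection.
    aws ec2 create-route --route-table-id rtb-9e0337f9 --destination-cidr-block 172.23.0.0/18 --vpc-peering-connection-id pcx-PLACEHOLDER
    aws ec2 create-route --route-table-id rtb-SOURCE-PLACEHOLDER --destination-cidr-block 172.18.16.0/21 --vpc-peering-connection-id pcx-PLACEHOLDER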

Laravel Echo Server, Redis

I have a problem with remote connections.
My vhost is: redis.test
I added this to my Blade file:
<script src="//redis.test:6001/socket.io/socket.io.js"></script>
.env file
BROADCAST_DRIVER=redis
REDIS_HOST=redis.test
REDIS_PASSWORD=null
REDIS_PORT=6379
echo configuration
import Echo from "laravel-echo"
window.Echo = new Echo({
broadcaster: 'socket.io',
host: 'http://redis.test:6001'
});
laravel-echo-server.json:
{
    "authHost": "http://redis.test",
    "authEndpoint": "/broadcasting/auth",
    "clients": [
        {
            "appId": "f27485125ac2627f",
            "key": "6328e672f42cbf4cba1de3da215ec41a"
        }
    ],
    "database": "redis",
    "databaseConfig": {
        "redis": {
            "port": "6379",
            "host": "redis.test"
        },
        "sqlite": {
            "databasePath": "/database/laravel-echo-server.sqlite"
        }
    },
    "devMode": true,
    "host": "redis.test",
    "port": "6001",
    "protocol": "http",
    "socketio": {},
    "sslCertPath": "",
    "sslKeyPath": ""
}
It works when I broadcast over a local connection (two browsers on the same PC), but when I try to send a "message" from another PC on the LAN (192.168.1.50) I get this error:
GET http://redis.test:6001/socket.io/socket.io.js net::ERR_CONNECTION_REFUSED
[Vue warn]: Error in created hook
How can I resolve this?
As far as I can see it may be a firewall issue; try opening the Redis port in the firewall.
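A few quick checks from the second PC may narrow this down. A sketch only: it assumes the server uses ufw, and that redis.test is a local vhost which the other machine must also be able to resolve (for example via its hosts file).

    ping -c1 redis.test          # does the LAN client resolve the vhost at all?
    nc -zv redis.test 6001       # can it reach the laravel-echo-server port?
    sudo ufw allow 6001/tcp      # on the server: open the socket.io port if the firewall blocks it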

How to correctly deploy multiple Meteor instances WITH SSL on one DigitalOcean droplet using mup?

My mup.json config for the first Meteor instance:
{
    "servers": [
        {
            "host": "111.222.333.444",
            "username": "root",
            "password": "mypass"
        }
    ],
    "setupMongo": true,
    "setupNode": true,
    "nodeVersion": "0.10.40",
    "setupPhantom": false,
    "enableUploadProgressBar": true,
    "appName": "myapp1",
    "app": "../myapp1",
    "env": {
        "PORT": 3001,
        "ROOT_URL": "https://my.domain.com"
    },
    "ssl": {
        "pem": "./ssl.pem"
    },
    "deployCheckWaitTime": 15
}
So after deployment I want to access this instance at https://my.domain.com:3001. Then, with a similar configuration, I want to deploy a second instance to the same droplet and access it at https://my.domain.com:3002.
The problem is that after deployment, accessing it over HTTPS gives ERR_CONNECTION_CLOSED, while accessing it over HTTP is OK.
How can I make it work?
Finally, I did it.
At first I used mupx, and I had trouble there too; later I found that my mistake was using the same ports for different apps or protocols. So here are the working configurations of the first and second apps:
{
    "servers": [{
        "host": "111.222.333.444",
        "username": "root",
        "password": "mypass",
        "env": {}
    }],
    "setupMongo": true,
    "appName": "myapp1",
    "app": "../myapp1",
    "env": {
        "PORT": 8000,
        "ROOT_URL": "http://my.domain.com"
    },
    "deployCheckWaitTime": 15,
    "enableUploadProgressBar": true,
    "ssl": {
        "certificate": "../ssl/bundle.crt",
        "key": "../ssl/private.key",
        "port": 8001
    }
}
{
    "servers": [{
        "host": "111.222.333.444",
        "username": "root",
        "password": "mypass",
        "env": {}
    }],
    "setupMongo": true,
    "appName": "myapp2",
    "app": "../myapp2",
    "env": {
        "PORT": 8100,
        "ROOT_URL": "http://my.domain.com"
    },
    "deployCheckWaitTime": 15,
    "enableUploadProgressBar": true,
    "ssl": {
        "certificate": "../ssl/bundle.crt",
        "key": "../ssl/private.key",
        "port": 8101
    }
}
bundle.crt and private.key are common for all apps.
Don't forget to use mupx.
So after
mupx setup
mupx deploy
we can access the first app at
http://my.domain.com:8000
https://my.domain.com:8001
and the second app at
http://my.domain.com:8100
https://my.domain.com:8101
EDIT: accessing over HTTP is not working. I don't know why; maybe it's just my configuration. But I don't need that feature, I only need HTTPS. So if you know how to fix it, please write.
EDIT 2: it's all right, HTTP access works. The reason was the Chrome browser: it always redirected my domain from HTTP to HTTPS. After clearing the browser history everything works fine.
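To rule out the browser's cached HTTP-to-HTTPS redirect when re-testing, a plain curl check of both ports may help (a sketch using the host and ports from the configs above):

    curl -I  http://my.domain.com:8000    # plain HTTP endpoint of the first app
    curl -kI https://my.domain.com:8001   # HTTPS endpoint; -k skips certificate validation for a quick check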