While trying to start the Docker image found here for Restcomm's load balancer, I get the following error:
2017-01-09 13:40:41,359 ERROR main org.mobicents.tools.sip.balancer.BalancerRunner.start(BalancerRunner.java:280) - An unexpected error occurred while starting the load balancer
java.lang.IllegalStateException: Can't create sip objects and lps due to[Index: 0, Size: 0]
at org.mobicents.tools.sip.balancer.SIPBalancerForwarder.start(SIPBalancerForwarder.java:792)
at org.mobicents.tools.sip.balancer.BalancerRunner.start(BalancerRunner.java:255)
at org.mobicents.tools.sip.balancer.BalancerRunner.start(BalancerRunner.java:346)
at org.mobicents.tools.sip.balancer.BalancerRunner.main(BalancerRunner.java:150)
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.remove(ArrayList.java:492)
at org.mobicents.tools.sip.balancer.SIPBalancerForwarder.start(SIPBalancerForwarder.java:357)
... 3 more
I ran the following docker run command:
docker run --name=lb -e LOG_LEVEL=all restcomm/load-balancer:latest
I tried looking up the lines from the stack trace in the Load Balancer GitHub repo, but it appears the Docker image does not contain exactly the same code, so the referenced line numbers don't match up.
Sorry for the delay in getting back to you. We have updated the image for the docker container. Please do the following.
docker rm lb
This will remove the previous container.
Next, do a pull to get the new image:
docker pull restcomm/load-balancer:latest
Once the pull has completed, try running the load balancer again as follows:
docker run --name=lb restcomm/load-balancer:latest
Regards,
Charles
I'm using the latest Helm chart to install Airflow 2.1.1 on Kubernetes. I have a problem with S3 logging; I keep getting the error message:
*** Falling back to local log
*** Log file does not exist: /opt/airflow/logs/test_connection/send_slack_message/2021-07-16T08:48:27.337421+00:00/2.log
*** Fetching from: http://airflow2-worker-1.airflow2-worker.airflow2.svc.cluster.local:8793/log/test_connection/send_slack_message/2021-07-16T08:48:27.337421+00:00/2.log
in the task logs.
This is the relevant part of the chart values:
AIRFLOW__LOGGING__REMOTE_LOGGING: "True"
AIRFLOW__LOGGING__REMOTE_LOG_CONN_ID: "s3_logs"
AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER: "s3://.../temp/airflow_logs/stg"
The s3_logs connection is defined like this:
What am I missing?
Technical details:
chart - airflow-8.4.0
app version - 2.1.1
eks version - 1.17
So it turns out that the S3 target folder must exist before the first log is written; creating it solved the issue. I hope this helps someone in the future!
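If you want to create that folder up front rather than by hand, a minimal sketch with boto3 (the bucket name is a placeholder for the redacted value in the question):
import boto3

# Placeholder bucket/prefix; substitute the real values from
# AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER
s3 = boto3.client("s3")
s3.put_object(Bucket="your-bucket", Key="temp/airflow_logs/stg/")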
Setting: lots of MP3 recordings of customer support conversations stored in a DB. Each MP3 recording has two channels: one is the customer rep, the other is the customer's voice.
I need to extract an embedding (tensor) of the customer's voice. It's a 3-step process:
get the channel, cut 10 seconds, convert to an embedding. I have a function for each of the 3 steps.
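Roughly, the three steps chain together like this (the helper names below are just placeholders for my real functions):
def extract_customer_embedding(mp3_path):
    # get_channel / cut_seconds / to_embedding are placeholders for the
    # three existing functions mentioned above
    customer_channel = get_channel(mp3_path, channel=1)  # channel with the customer's voice (assumed index)
    clip = cut_seconds(customer_channel, seconds=10)     # keep 10 seconds of audio
    return to_embedding(clip)                            # returns the embedding tensor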
The embedding is a vector tensor:
"tensor([[0.6540e+00, 0.8760e+00, 0.898e+00,
0.8789e+00, 0.1000e+00, 5.3733e+00]])
Tested with Postman. The get_embedding function and routes are shown in server.py below.
I want to build a REST API that connects on one endpoint to the DB of MP3 files and outputs embeddings to another DB.
I need to clarify an important point about Docker.
When I run python server.py, Flask makes it available on my local PC at 127.0.0.1:9090:
from flask import Flask, jsonify

app = Flask(__name__)

def get_embedding(file):
    # some code
    ...

@app.route('/health')
def check():
    return jsonify({'response': 'OK!'})

@app.route('/get_embedding')
def show_embedding():
    return get_embedding(file1)

if __name__ == '__main__':
    app.run(debug=True, port=9090)
When I run it with Docker, where do the server and the files go? Where does it become available online? Does Docker upload all the files to some default Docker cloud?
You need to write a Dockerfile to build your Docker image, then run a container from that image with the port exposed; after that you can access it at machineIP:PORT.
Below is an example.
Dockerfile
#FROM tells Docker which image you base your image on (in the example, Python 3).
FROM python:3
#WORKDIR sets the working directory inside the container
WORKDIR /usr/app
# COPY files from your host to the image working directory
COPY my_script.py .
#RUN tells Docker which additional commands to execute.
RUN pip install pystrich
CMD [ "python", "./my_script.py" ]
Ref:- https://docs.docker.com/engine/reference/builder/
And then build the image,
docker build -t server .
Ref:- https://docs.docker.com/engine/reference/commandline/build/
Once the image is built, start a container and expose the port through which you can access your application.
E.g.
docker run -p 9090:9090 server
-p Publish a container's port(s) to the host
Then access your application at localhost:9090, 127.0.0.1:9090, or machineIP:ExposedPort.
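For the Flask server in the question specifically, a minimal sketch of the Dockerfile might look like this (the file name and dependencies are assumptions based on the question):
FROM python:3
WORKDIR /usr/app
# Assumes the Flask app shown in the question is saved as server.py
COPY server.py .
# Flask is the only dependency visible in the question; add your audio/embedding libraries as needed
RUN pip install flask
CMD [ "python", "./server.py" ]
One important detail: inside the container, Flask must listen on all interfaces, i.e. app.run(host='0.0.0.0', port=9090), otherwise the published mapping -p 9090:9090 cannot reach it. The server and your files stay inside the image and container on your own machine; nothing is uploaded to any default Docker cloud unless you explicitly docker push the image to a registry.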
This is driving me nuts.
I'm setting up Airflow in a cloud environment. I have one server running the scheduler and the webserver and another server as a Celery worker, and I'm using Airflow 1.8.0.
Running jobs works fine. What refuses to work is logging.
I've set up the correct path in airflow.cfg on both servers:
remote_base_log_folder = s3://my-bucket/airflow_logs/
remote_log_conn_id = s3_logging_conn
I've set up s3_logging_conn in the airflow UI, with the access key and the secret key as described here.
I checked the connection using
s3 = airflow.hooks.S3Hook('s3_logging_conn')
s3.load_string('test','test',bucket_name='my-bucket')
This works on both servers. So the connection is properly set up. Yet all I get whenever I run a task is
*** Log file isn't local.
*** Fetching here: http://*******
*** Failed to fetch log file from worker.
*** Reading remote logs...
Could not read logs from s3://my-bucket/airflow_logs/my-dag/my-task/2018-02-15T21:46:47.577537
I tried manually uploading a log following the expected conventions, and the webserver still can't pick it up, so the problem is on both ends. I'm at a loss as to what to do; everything I've read so far tells me this should be working. I'm close to just installing 1.9.0, which I hear changes logging, to see if I have more luck.
UPDATE: I made a clean install of Airflow 1.9 and followed the specific instructions here.
Webserver won't even start now with the following error:
airflow.exceptions.AirflowConfigException: section/key [core/remote_logging] not found in config
There is an explicit reference to this section in this config template.
So I tried removing it and just loading the S3 handler without checking first, and I got the following error message instead:
Unable to load the config, contains a configuration error.
Traceback (most recent call last):
File "/usr/lib64/python3.6/logging/config.py", line 384, in resolve
self.importer(used)
ModuleNotFoundError: No module named
'airflow.utils.log.logging_mixin.RedirectStdHandler';
'airflow.utils.log.logging_mixin' is not a package
I get the feeling that this shouldn't be this hard.
Any help would be much appreciated, cheers
Solved:
upgraded to 1.9
ran the steps described in this comment
added
[core]
remote_logging = True
to airflow.cfg
ran
pip install --upgrade airflow[log]
Everything's working fine now.
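For reference, with the pieces above in place the relevant airflow.cfg keys end up looking roughly like this (bucket path and connection id taken from the question; encrypt_s3_logs is optional):
[core]
remote_logging = True
remote_base_log_folder = s3://my-bucket/airflow_logs/
remote_log_conn_id = s3_logging_conn
encrypt_s3_logs = False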
I'm trying to get a Selenium script running on an Elastic Beanstalk server. To achieve this I am using the pyvirtualdisplay package, following this answer. However, for the Display driver to run, Xvfb also needs to be installed on the system. I'm getting this error message:
OSError=[Errno 2] No such file or directory: 'Xvfb'
Is there any way to manually install this on EB? I have also set up an EC2 server as suggested here, but the whole process seems unnecessary for this task.
You can create a file in .ebextensions/, e.g. .ebextensions/xvfb.config, with the following content:
packages:
  yum:
    xorg-x11-server-Xvfb: []
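Once Xvfb is available, the usual pyvirtualdisplay pattern looks roughly like this (the screen size is arbitrary):
from pyvirtualdisplay import Display

# Start a virtual framebuffer so Selenium can drive a browser without a real display
display = Display(visible=0, size=(1024, 768))
display.start()
# ... run the Selenium code here ...
display.stop()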
I wanted to start an Ubuntu container on OpenShift Origin. I have my local registry and pulling from it succeeds. The container starts but immediately goes into CrashLoopBackOff and stops. The Ubuntu image that I have runs as root.
Started container with docker id 28250a528e69
Created container with docker id 28250a528e69
Successfully pulled image "ns1.myregistry.com:5000/ubuntu#sha256:6d9a2a1bacdcb2bd65e36b8f1f557e89abf0f5f987ba68104bcfc76103a08b86"
pulling image "ns1.myregistry.com:5000/ubuntu#sha256:6d9a2a1bacdcb2bd65e36b8f1f557e89abf0f5f987ba68104bcfc76103a08b86"
Error syncing pod, skipping: failed to "StartContainer" for "ubuntu" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=ubuntu pod=ubuntu-2-suy6p_testproject(69af5cd9-5dff-11e6-940e-0800277bbed5)"
The container runs with restricted privileges. I don't know how to start the pod in privileged mode, so I edited the restricted SCC as follows so that my image with root access will run:
NAME         PRIV   CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   PRIORITY   READONLYROOTFS   VOLUMES
restricted   true   []     RunAsAny   RunAsAny    RunAsAny   RunAsAny   <none>     false            [configMap downwardAPI emptyDir persistentVolumeClaim secret]
But I still couldn't successfully start my container.
There are two commands that are helpful for CrashLoopBackOff debugging.
oc debug pod/your-pod-name will create a very similar pod and exec into it. You can look at the different options for launching it; some deal with SCC options. You can also point it at a dc, rc, is, or most other things that can stamp out pods.
oc logs -p pod/your-pod-name will retrieve the logs from the last run of the pod, which may have useful information too.
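If the underlying problem is that the image needs to run as root, one common approach is to grant the anyuid SCC to the service account the pod runs under (assuming the default service account in the testproject namespace seen in the error message), rather than editing the restricted SCC itself:
oc adm policy add-scc-to-user anyuid -z default -n testproject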