I have implemented the following two approaches:
1. A Jitsi setup in which Jitsi, JVB, and Jibri run on separate servers.
2. A Jitsi setup using Docker, in which separate containers run the Jibri servers.
Both setups work fine, but I am unable to autoscale Jibri in either of them. I have searched a lot on this topic and have not found a reliable solution.
Please help if anyone has an idea of how to autoscale Jibri servers for multiple video recordings.
You can use Jibri with PulseAudio and use a Horizontal Pod Autoscaler (HPA) for scaling.
Please see this repo.
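For what it's worth, here is a minimal sketch of what creating such an HPA could look like with the Kubernetes Python client, assuming the Jibri instances run as a Deployment named "jibri" (the deployment name, namespace, and CPU target are all assumptions, not something from the repo):

    # Sketch: create a CPU-based HPA for an assumed "jibri" Deployment.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside the cluster

    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="jibri-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="jibri"),
            min_replicas=1,
            max_replicas=10,
            target_cpu_utilization_percentage=60,  # assumed threshold
        ),
    )
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="default", body=hpa)

The equivalent YAML manifest works just as well; the point is only that each Jibri pod needs its own PulseAudio setup so replicas can record independently.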
You can try the Jibri On Demand service. It is a SaaS, so you don't need to worry about autoscaling.
Configure an AWS S3 IAM access key and secret key, and the finalize script will upload recordings to your bucket.
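As a rough illustration of the finalize-script idea (not the service's actual script), a minimal sketch using boto3, assuming Jibri passes the recording directory as the first argument and that the bucket name is a placeholder:

    # Sketch of a Jibri finalize script: upload the finished recording to S3.
    # Assumes the recording directory arrives as argv[1] and AWS credentials
    # come from the usual IAM/environment mechanisms.
    import os
    import sys
    import boto3

    recording_dir = sys.argv[1]
    s3 = boto3.client("s3")
    bucket = "my-recordings-bucket"  # placeholder bucket name

    for root, _dirs, files in os.walk(recording_dir):
        for name in files:
            local_path = os.path.join(root, name)
            key = os.path.relpath(local_path, recording_dir)
            s3.upload_file(local_path, bucket, key)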
The price is OK if I only want to handle peaks.
My configuration is one dedicated server hosted by myself, with the rest handled by JibriOnDemand.
A WebLogic server got hacked, and the problem has now been removed.
I am now looking through the infected VMs in a sandbox and want to see what data, if any, was accessed on the application servers.
The app servers were getting hammered with SSH requests, which is how we identified the infected VMs as the WebLogic VMs; we did not have HTTP logging on. Is there any way to identify whether any PII was compromised?
I have looked through the secure logs on the WebLogic servers as well as the PIA logs.
I am not sure how to identify what data, if any, was accessed.
I would like to find out what information or data went out of our network.
What should I be looking for?
Is there anything I can learn from looking at the WebLogic servers running on Red Hat?
I would be inclined to believe that SSH was not the only service being hammered, and that this was a large-scale attempt to draw eyes to the auth logging while an attempt on other services was made.
Do you have a time frame that you are working with?
Have the OS logs been checked for that time frame?
Has .bash_history been checked? Environment variables? /etc/pass* for added users? Aliases? Reverse shells open in the network connections? New users created on services running on that particular host?
Was WebLogic the only service running on this publicly available host?
What other services and ports were available?
Was this due to an older version of WebLogic, or to another service, application, or plugin?
Create yourself an Excel spreadsheet and start a timeline.
Look at all the OS-level logging possible and make a note of anything that looks suspicious, then follow each breadcrumb to exhaustion.
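For example, a first pass over the SSH auth log for your time frame could look something like this (a sketch only; the log path is the Red Hat default, and the pattern assumes standard sshd "Failed password" messages):

    # Sketch: summarize failed SSH logins by source IP and username
    # from /var/log/secure (Red Hat); Debian/Ubuntu uses /var/log/auth.log.
    import re
    from collections import Counter

    failed = Counter()
    pattern = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

    with open("/var/log/secure", errors="replace") as log:
        for line in log:
            match = pattern.search(line)
            if match:
                user, ip = match.groups()
                failed[(ip, user)] += 1

    for (ip, user), count in failed.most_common(20):
        print(f"{count:6d}  {ip:15s}  {user}")

The same idea applies to wtmp/btmp, sudo logs, and any WebLogic access logs you do have: pull them into your timeline and look for the outliers.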
I've been searching the help forums, and the only documentation I've seen on how to do this is to create the gateway and then spin up VMs to run your application. We are using Docker containers, and I'm not sure how to proceed. Additionally, is it possible to block off all access to the applications behind a gateway and have them be accessible only through the gateway? Thanks a lot.
I am new to Apache Airflow and so far, I have been able to work my way through problems I have encountered.
I have hit a wall now. I need to transfer files to a remote server via sftp. I have not had any luck doing this. So far, I have gotten S3 and Postgres/Redshift connections via their respective hooks to work in various DAGs. I have been able to use the FTPHook with success testing on my local FTP server, but have not been able to figure out how to use SFTP to connect to a remote host.
I am able to connect to the remote host via SFTP with FileZilla, so I know my credentials are correct.
Through Google searching I have found the SFTPOperator, but am not able to figure out how to use it. I have also found FTPSHook, but still I have not been able to get it to work.
I keep getting the error "nodename nor servname provided, or not known" or a general "Operation timed out" in my Airflow logs.
Can someone point me in the right direction? Should I be using the FTPSHook with SSH or FTP Airflow Conn Type? Or do I need to utilize the SFTPOperator? I am also confused as to how I am supposed to setup the credentials in my Airflow connections. Do I use the SSH profile or FTP?
If I can provide any more additional info that may help, please let me know.
Cheers!
SFTPOperator uses ssh_hook under the hood to open an SFTP transport channel that serves as the basis for the file transfer. You can either configure the ssh_hook yourself or provide a connection id via ssh_conn_id.
    # In Airflow 1.x these live in airflow.contrib; in Airflow 2.x they moved
    # to the apache-airflow-providers-sftp package.
    from airflow.contrib.operators.sftp_operator import SFTPOperator, SFTPOperation

    op = SFTPOperator(
        task_id="test_sftp",
        ssh_conn_id="my_ssh_connection",  # an Airflow connection of type SSH
        local_filepath="",                # path to the file on the worker
        remote_filepath="",               # destination path on the SFTP server
        operation=SFTPOperation.PUT,      # PUT uploads, GET downloads
        dag=dag,
    )
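If you prefer to configure the hook yourself rather than use ssh_conn_id, a sketch along these lines should also work (the host, user, and paths are placeholders):

    # Sketch: constructing the SSHHook explicitly and handing it to SFTPOperator.
    from airflow.contrib.hooks.ssh_hook import SSHHook
    from airflow.contrib.operators.sftp_operator import SFTPOperator, SFTPOperation

    ssh_hook = SSHHook(
        remote_host="sftp.example.com",   # placeholder host
        username="myuser",                # placeholder user
        key_file="/path/to/id_rsa",       # or password="..."
        port=22,
    )

    op = SFTPOperator(
        task_id="test_sftp_with_hook",
        ssh_hook=ssh_hook,
        local_filepath="/tmp/file.csv",     # placeholder paths
        remote_filepath="/upload/file.csv",
        operation=SFTPOperation.PUT,
        dag=dag,                            # the DAG object from your DAG file, as above
    )

Either way, the connection you reference should be of type SSH; the FTP connection type is only for the plain FTPHook.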
I'm pretty new to docker / docker-machine / docker-compose and use them for a Meteor app that needs to connect to a queue and a few other services. I need to set up SSL on localhost, as we're using the getUserMedia API (which Chrome is deprecating on insecure connections).
I believe I need to create a self-signed certificate, but I'm not sure what to do with it after that. Do I set it up on my local machine, or do I set it up in the Docker container?
Note that Meteor is actually running in development mode in its container locally.
Any definitive help getting started on this would be great.
EDIT: While the similar question noted in the comments seems to solve the problem for Meteor specifically, I'm more interested in the context of Docker and OS X. While my actual problem is currently with a Meteor app, I would like to find a solution that is not Meteor-dependent but is considerate of the use case.
Hi, I'm currently working on a side project. In this project I'll have a central server that will need to connect to several remote Docker daemons. My problem is with authentication.
Given that the project will be hosted on DigitalOcean, my first thought was to accept only connections from the private networking interface. The problem is that that interface is accessible to all other servers in the same datacenter.
My second thought was to allow only requests from the central server using the DOCKER_HOST config; the problem is that, if I understand correctly, if the private IP of the central server becomes known, the IP can be spoofed.
My third thought is to enable TLS ( https://docs.docker.com/articles/https/ ), but I've never dealt with these things before and the tutorial is unclear to me; I lack knowledge of the terminology, and it is used heavily there.
So basically the problem is that I have a central client and multiple remote Docker hosts; what is the best way to connect to them? Thank you.
EDIT: I managed to solve the problem using HTTP authentication by running nginx as a proxy in front of the docker daemon.
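In case it helps anyone else, here is a rough sketch of what the central server's side of that setup can look like, assuming nginx terminates HTTP basic auth and proxies to the Docker Remote API (the host and credentials are placeholders):

    # Sketch: calling the Docker Remote API through an nginx basic-auth proxy.
    import requests

    DOCKER_API = "https://docker-host.example.com"  # placeholder proxy address
    AUTH = ("central-server", "secret")             # placeholder credentials

    # List running containers on the remote daemon.
    resp = requests.get(f"{DOCKER_API}/containers/json", auth=AUTH, timeout=10)
    resp.raise_for_status()
    for container in resp.json():
        print(container["Id"][:12], container["Image"], container["Status"])

Serving the proxy over HTTPS matters here, since basic auth credentials are otherwise sent in the clear.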
My understanding is that you are trying to build a Docker cluster that can manage all nodes from one single central server.
This is very much what Docker's Swarm project does. From their docs, they give a simple idea of how this works:
open a TCP port on each node for communication with the swarm manager
install Docker on each node
create and manage TLS certificates to secure your swarm
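To make the TLS step a bit more concrete: once the CA, server, and client certificates from that tutorial exist, connecting from the central server could look roughly like this (a sketch using the Docker SDK for Python; hostnames and file paths are placeholders):

    # Sketch: connecting to a remote Docker daemon over TLS with client certs.
    import docker

    tls_config = docker.tls.TLSConfig(
        client_cert=("/certs/cert.pem", "/certs/key.pem"),  # placeholder client cert/key
        ca_cert="/certs/ca.pem",                            # CA that signed the daemon's cert
        verify=True,
    )

    client = docker.DockerClient(
        base_url="tcp://node1.example.com:2376",            # placeholder remote daemon
        tls=tls_config,
    )
    print(client.containers.list())

The daemon only accepts clients whose certificates were signed by your CA, which is what removes the IP-spoofing concern from the earlier options.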
Sorry, this should be posted as a comment, but I do not have enough rep to do that.