How to install schema registry - confluent-schema-registry

I am looking for options to install the Confluent Schema Registry. Is it possible to download and install the registry alone and make it work with an existing Kafka setup?
Thanks

Assuming you have Zookeeper/Kafka running already, you can easily run Confluent Schema Registry using Docker by running the following command:
docker run -p 8081:8081 -e \
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=host.docker.internal:2181 \
-e SCHEMA_REGISTRY_HOST_NAME=localhost \
-e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
-e SCHEMA_REGISTRY_DEBUG=true confluentinc/cp-schema-registry:5.3.2
Parameters:
-p 8081:8081 - maps port 8081 from the container to your machine
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL - your Zookeeper host and port; I'm using host.docker.internal to resolve the local machine that is hosting Zookeeper (outside of the container)
SCHEMA_REGISTRY_HOST_NAME - the hostname advertised in Zookeeper. This is required if you are running Schema Registry with multiple nodes. It is needed because it defaults to the Java canonical hostname for the container, which may not always be resolvable in a Docker environment.
SCHEMA_REGISTRY_LISTENERS - the Schema Registry host and port number to open
SCHEMA_REGISTRY_DEBUG - run in debug mode
Note: the command above uses version 5.3.2; make sure this version is aligned with your Kafka version.
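Once the container is up, a quick sanity check is to hit the REST API; a fresh registry should return an empty list of subjects:
curl http://localhost:8081/subjects
# []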

Yes, you can use your existing Kafka setup; just match it to the compatible version of Confluent Platform. Here are the docs on getting started:
https://docs.confluent.io/current/schema-registry/docs/intro.html#installation
tl;dr: download the platform to pull out the pieces you need, or get the Docker image and point it at your Kafka cluster.
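For example, a sketch of the Docker route pointed directly at your Kafka brokers rather than Zookeeper (the broker address host.docker.internal:9092 is just an assumption for illustration):
docker run -p 8081:8081 \
-e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=PLAINTEXT://host.docker.internal:9092 \
-e SCHEMA_REGISTRY_HOST_NAME=localhost \
-e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
confluentinc/cp-schema-registry:5.3.2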


How are the --network options available in podman?

I am running a virtual environment on CentOS with Podman.
When I use the --net option of the podman run command, I get an error.
[user@server ~]$ podman run --net slirp4netns:port_handler=slirp4netns -p 1080:80 -d --name web nginx
Error: cannot join CNI networks if running rootless: invalid argument
Is this option unavailable, or is there a problem with the way I am specifying it?
Please tell me the solution.
I used this site as a reference for the command.
This is the configuration of the server.
[user@server ~]$ cat /etc/redhat-release
CentOS Linux release 8.2.2004 (Core)
[user@server ~]$ podman -v
podman version 2.0.6
The port_handler option requires Podman >= 2.1.0, which has not been released at the time of writing: https://github.com/containers/podman/commit/d86bae2a01cb855d5964a2a3fbdd41afe68d62c8
You can use that option if you compile Podman from its master branch.
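In the meantime, a sketch of what should work rootless on 2.0.6 is to drop the port_handler option and let the default rootless networking publish the port (the container name and ports are taken from the question):
podman run -p 1080:80 -d --name web nginx
# after upgrading to Podman >= 2.1.0, the original form should work:
podman run --net slirp4netns:port_handler=slirp4netns -p 1080:80 -d --name web nginx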
I find these links quite helpful for understanding rootless networking:
https://www.redhat.com/sysadmin/container-networking-podman
https://podman.io/getting-started/network
I am not sure whether you have seen these links before, or whether they help in this instance. But, in the interest of helping others out, the blog posts contain the following helpful statements:
Note: All podman network commands are for rootfull containers only.
Technically, the container itself does not have an IP address, because without root privileges, network device association cannot be achieved
When using Podman as a rootless user, the network is setup automatically. The container itself does not have an IP Address, because without root privileges, network association is not allowed. You will also see some other limitations.

How can I develop in docker container with intellij?

I know IntelliJ has a Docker container plugin, but it doesn't seem to allow me to develop inside the container itself. The idea is simple: I don't want to configure my host to have the correct environment tools. I'd rather just have a Docker container set up and then use IntelliJ to find libs, functionality and such within the container itself.
This would be incredibly helpful for C++, Java, and Scala development. It would also be useful for debugging.
So is it possible to develop within a docker container with intellij?
So you want to work within a container just as you would within a full-blown VM, right? Then you should just run a container, attach a display (to run IDEA), and start configuring your development environment.
For the display part I'd test some answers given in Can you run GUI apps in a docker container?. There are some very cool answers in this topic showing various approaches to running GUI apps within a container.
Shouldn't the approach rather be:
Have a local repository and a local IDE. In the repository, have a Dockerfile and possibly a docker-compose.yml, which spins up the environment required to run the project.
Mount your local drive with the sources into Docker (volumes), so changes done in your local folder are reflected in the container, and similarly in the other direction (as sketched below).
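A minimal sketch of that mount, using plain docker run (the image and paths are illustrative; a docker-compose.yml could express the same thing):
# host sources mounted into the container at /workspace
docker run -it --rm -v "$(pwd)":/workspace -w /workspace openjdk:8-jdk bash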
Please look at this example Dockerfile for IntelliJ IDEA Community and JDK 8, based on Alpine Linux (taken from
https://raw.githubusercontent.com/shaharv/docker/master/alpine/dev/Dockerfile):
# Alpine 3.8 C++/Java Developer Image
#
# For IntelliJ and GUI (X11), run the image with:
# $ XSOCK=/tmp/.X11-unix && sudo docker run -i -v $XSOCK:$XSOCK -e DISPLAY -u developer -t [image-name]
#
# Then run IntelliJ with:
# /idea-IC-191.6707.61/bin/idea.sh
FROM alpine:3.8
ENV LANG C.UTF-8
RUN set -ex && \
    apk add --no-cache --update \
    # basic packages
    bash bash-completion coreutils file grep openssl openssh nano sudo tar xz \
    # debug tools
    gdb musl-dbg strace \
    # docs and man
    bash-doc man man-pages less less-doc \
    # GUI fonts
    font-noto \
    # user utils
    shadow
RUN set -ex && \
    apk add --no-cache --update \
    # C++ build tools
    cmake g++ git linux-headers libpthread-stubs make
RUN set -ex && \
    apk add --no-cache --update \
    # Java tools
    gradle openjdk8 openjdk8-dbg
# Install IntelliJ Community
RUN set -ex && \
    wget https://download-cf.jetbrains.com/idea/ideaIC-2019.1.1-no-jbr.tar.gz && \
    tar -xf ideaIC-2019.1.1-no-jbr.tar.gz && \
    rm ideaIC-2019.1.1-no-jbr.tar.gz
# Create a new user with no password
ENV USERNAME developer
RUN set -ex && \
    useradd --create-home --key MAIL_DIR=/dev/null --shell /bin/bash $USERNAME && \
    passwd -d $USERNAME
# Set additional environment variables
ENV JAVA_HOME /usr/lib/jvm/java-1.8-openjdk
ENV JDK_HOME /usr/lib/jvm/java-1.8-openjdk
ENV JAVA_EXE /usr/lib/jvm/java-1.8-openjdk/bin/java
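To try it out, build the image and follow the X11 instructions from the comments at the top of the Dockerfile (the image tag alpine-idea-dev is arbitrary):
sudo docker build -t alpine-idea-dev .
XSOCK=/tmp/.X11-unix && sudo docker run -i -v $XSOCK:$XSOCK -e DISPLAY -u developer -t alpine-idea-dev
# then, inside the container:
/idea-IC-191.6707.61/bin/idea.sh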
There is a better way to do this now with JetBrains Gateway.
Make sure an OpenSSH server is installed in the container (recent Ubuntu images already have it) and that you initially ran the container with an SSH port exposed, e.g. -p 220:22 (I like 220). Modify /etc/ssh/sshd_config to enable root login and password authentication, then start the SSH service with service ssh start (or service ssh restart after config changes). Also set a password for the root user with passwd root, or go through the extra steps to set up a new user.
Then all you need to do is open JetBrains Gateway and SSH to the container with the fields set thus: user=root, host=localhost, and port=220 (or whatever you chose). Note that you will also need to specify a project location, which in my use case is the root directory of a Java application repository -- this means you will need Java and Maven (or whatever other tools) installed in the container at some point, but it doesn't affect the ability to connect.
Assuming you connect with no issues, you will see Gateway install an IDE backend inside the container (this takes about 10 minutes) and then start an IDE client, a light version of IntelliJ (or whichever other IDE you selected) that is honestly a bit buggy at the time of writing. But it works, and it has unblocked some of my colleagues stuck with Windows machines and few options to upgrade to Macs in the current chip shortage.
Note that any time you restart the container you also need to restart the SSH service, unless you script it to start automatically when the container does.
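A rough sketch of that container preparation (the container name dev, the image tag, and host port 220 are illustrative; openssh-server is installed explicitly in case the image does not already ship it):
sudo docker run -d -p 220:22 --name dev ubuntu:22.04 sleep infinity
sudo docker exec dev bash -c "apt-get update && apt-get install -y openssh-server"
sudo docker exec dev bash -c "sed -i 's/#PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config && service ssh start"
sudo docker exec -it dev passwd root
After that, Gateway should be able to connect with user=root, host=localhost, port=220.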

Upgrade Redis cluster Ubuntu

I have installed Redis cluster 3.0.0, but want to upgrade it to 3.0.7. Can somebody tell me the steps to do it?
I don't want to lose any data, and I don't want any downtime either.
These are the steps I followed when upgrading from 2.9.101 to the 3.0 release. I hope they will do for upgrading to 3.0.7 too.
Compile 3.0.7 from source and start several instances with cluster mode enabled.
Let the 3.0.7 instances replicate the 3.0.0 instances as slaves.
Connect to each 3.0.7 instance and do a manual failover (see the sketch after this list); the 3.0.0 masters will become slaves after several seconds.
Wait for your application to connect to the new masters; also check the configuration files and modify the entries to point at the new masters as needed.
Remove the old 3.0.0 instances, which are now slaves.
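A rough sketch of the replicate and failover steps with redis-cli (the ports and the node id are placeholders for illustration):
redis-cli -p 7007 CLUSTER MEET 127.0.0.1 7000            # join the new 3.0.7 node to the existing cluster
redis-cli -p 7007 CLUSTER REPLICATE <old-master-node-id> # replicate one of the 3.0.0 masters
redis-cli -p 7007 CLUSTER FAILOVER                       # promote it; the old master becomes a slave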
UPDATE: Docker approach
Since you probably can't replace the binary executable while the process is still alive, you could do it by running Redis in Docker.
First you should install Docker on your machine and pull the Redis image, or pull a basic OS image and manually build Redis in it, whichever you prefer.
Based on this image, you are supposed to:
copy your current redis.conf into it
make sure dir exists in the image (cluster-config-file can be the same for all the containers, as each is saved in its own filesystem)
make sure the directory for logfile exists and is not the same as dir (we will later map this directory to the host)
leave port and logfile set to anything you like, as they are overridden when a container is started
commit the image as redis-3.0.7
Now launch a containerized Redis. Suppose your logfile is located in /var/log/redis/, this Redis binds to :8000, and your config file in the image is /etc/redis/redis.conf:
docker run -d --net=host -v /var/log/redis:/var/log/redis \
-p 8000:8000 -t redis-3.0.7 \
/usr/bin/redis-server /etc/redis/redis.conf \
--port 8000 \
--logfile /var/log/redis/redis_8000.log
Now you have a Redis 3.0.7 instance, and you are ready to finish the remaining steps from the first part.
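A quick sanity check on the new containerized instance, using the port from the example above:
redis-cli -p 8000 INFO server | grep redis_version   # should report 3.0.7
redis-cli -p 8000 CLUSTER INFO                       # cluster_state should be ok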

docker run cannot find name flag argument

I have recently set up an RStudio application on Google compute container engine using Docker and the rocker/rstudio image. Now I want to start my saved container with a name, using the following SSH command line:
sudo docker -d -p 8787:8787 --name samplename user/laatste
which returns the following error
flag provided but not defined: --name
I have tried with and without quotes, equal signs, double and single hyphens, before, between and after the other flags and arguments, but the same error keeps returning.
version information:
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.4.1
Git commit (client): a8a31ef
OS/Arch (client): linux/amd64
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.4.1
Git commit (server): a8a31ef
The reason I want to name the container is that I want to run standard (static) startup and shutdown scripts with the Google compute instance to automatically save and load changes made in R. The container name is used for identifying the container to be saved. Any other solution for this is also very welcome.
I guess you wanted to do:
sudo docker run -d -p 8787:8787 --name samplename user/laatste
You forgot to specify the command (run) here.
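Once the container has a name, the startup/shutdown scripts mentioned in the question can refer to it directly. A rough sketch of what the shutdown side could look like (the :saved tag is just an illustration):
sudo docker commit samplename user/laatste:saved   # snapshot the container, including changes made in R
sudo docker stop samplename
sudo docker rm samplename
The startup script could then run the saved image again under the same name:
sudo docker run -d -p 8787:8787 --name samplename user/laatste:saved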

Running Redis on Travis CI

I just included a Redis Store in my Express application and got it to work.
I want to include this Redis Store on Travis CI so that my code keeps working there. I read in the Travis documentation that it is possible to start Redis with the factory settings.
In my project I don't use the factory settings; I wrote my own redis.conf file, which specifies the port and the password.
So I added the following line to my .travis.yml file:
services:
- redis-server --port 6380 --requirepass 'secret'
But this returns the following on Travis CI:
$ sudo service redis-server\ --port\ 6380\ --requirepass\ \'secret\' start
redis-server --port 6380 --requirepass 'secret': unrecognized service
Is there any way to fix this?
If you want to customize the options for Redis on Travis CI, I'd suggest not using the services section, but rather doing this:
before_script: sudo redis-server /etc/redis/redis.conf --port 6380 --requirepass 'secret'
The services section runs services using their init/upstart scripts, which may not support the options you've added there. The command is also escaped for security reasons, hence the documentation only hints that you can list normal service names in that section.
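To confirm the custom instance is actually up before the tests run, a quick check could be added as well (this assumes redis-cli is available on the build image, which it normally is when Redis is installed):
redis-cli -p 6380 -a 'secret' ping   # should print PONG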