Assign variable within kubernetes yaml job - awk

I would like to run a command within the YAML file for Kubernetes.
Here is the part of the YAML file that I use.
The idea is to calculate a percent value based on mapped and unmapped values. mapped and unmapped are set properly, but the percent line fails.
I think the problem comes from the quotes around the BEGIN statement of the awk command, which I guess need to be escaped?
If mapped=8 and unmapped=7992,
then percent is (8/(8+7992))*100 = 0.1%
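For reference, the arithmetic can be checked directly in a shell, outside of Kubernetes, with the example values hard-coded:
awk -v map=8 -v unmap=7992 'BEGIN { print ((map/(unmap+map))*100) }'
0.1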
command: ["/bin/sh","-c"]
args: ['
...
echo "Executing command" &&
map=$(grep -c "^#" outfile.mapped.fq) &&
unmap=$(grep -c "^#" outfile.unmapped.fq) &&
percent=$(awk -v CONVFMT="%.10g" -v map="$map" -v unmap="$unmap" "BEGIN { print ((map/(unmap+map))*100)}") &&
echo "finished"
']

Thanks to the community comments: Ed Morton & David.
Create a ConfigMap from the directory containing the two data files,
outfile.mapped.fq
outfile.unmapped.fq
kubectl create configmap config-volume --from-file=/path_to_directory_with_files/
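To confirm both files were picked up before creating the pod, the ConfigMap can be inspected (standard kubectl, not specific to this example):
kubectl describe configmap config-volume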
Create pod:
apiVersion: v1
kind: Pod
metadata:
  name: awk-ubu
spec:
  containers:
    - name: awk-ubuntu
      image: ubuntu
      workingDir: /test
      command: [ "/bin/sh", "-c" ]
      args:
        - echo Executing_command;
          map=$(grep -c "^#" outfile.mapped.fq);
          unmap=$(grep -c "^#" outfile.unmapped.fq);
          percent=$(awk -v CONVFMT="%.10g" -v map="$map" -v unmap="$unmap" "BEGIN { print ((map/(unmap+map))*100)}");
          echo $percent;
          echo Finished;
      volumeMounts:
        - name: special-config
          mountPath: /test
  volumes:
    - name: special-config
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: config-volume
  restartPolicy: Never
Once the pod has completed, verify the result:
kubectl logs awk-ubu
Executing_command
53.3333
Finished

Related

Drone template not triggering build

Following is how our .drone.yml looks (the templates are also listed below); this is an example configuration very much like what we want in production. The reason we are using a template is that our staging and production have similar configurations, with only the values differing (hence the circuit template), and we wanted to remove the duplication using the template circuit.yaml.
But currently we are unable to do so. If I don't define test.yaml (the template) and instead have the test step imported without a template (while keeping the circuit template to avoid the duplicate declaration of the staging and production builds), the drone build fails with:
"template converter: template name given not found"
If I define the test step as a template, I see the test step working, but on creating a tag I see the following error:
{"commit":"28ac7ad3a01728bd1e9ec2992fee36fae4b7c117","event":"tag","level":"info","msg":"trigger: skipping build, no matching pipelines","pipeline":"test","ref":"refs/tags/v1.4.0","repo":"meetme2meat/drone-example","time":"2022-01-07T19:16:15+05:30"}
---
kind: template
load: test.yaml
data:
  commands:
    - echo "machine github.com login $${GITHUB_LOGIN} password $${GITHUB_PASSWORD}" > /root/.netrc
    - chmod 600 /root/.netrc
    - go clean -testcache
    - echo "Running test"
    - go test -race ./...
---
kind: template
load: circuit.yaml
data:
  deploy: deploy
  create_tags:
    commands:
      - echo "Deploying version $DRONE_SEMVER"
      - echo -n "$DRONE_SEMVER,latest" > .tags
  backend_image:
    version: ${DRONE_SEMVER}
    tags:
      - '${DRONE_SEMVER}'
      - latest
And the templates are below
test.yaml
kind: pipeline
type: docker
name: test
steps:
  - name: test
    image: golang:latest
    environment:
      GITHUB_LOGIN:
        from_secret: github_username
      GITHUB_PASSWORD:
        from_secret: github_token
    commands:
      {{range .input.commands }}
      - {{ . }}
      {{end}}
    volumes:
      - name: deps
        path: /go
  - name: build
    image: golang:alpine
    commands:
      - go build -v -o out .
    volumes:
      - name: deps
        path: /go
volumes:
  - name: deps
    temp: {}
trigger:
  branch:
    - main
  event:
    - push
    - pull_request
circuit.yaml
kind: pipeline
type: docker
name: {{ .input.deploy }}
steps:
  - name: create-tags
    image: alpine
    commands:
      {{range .input.create_tags.commands }}
      - {{ . }}
      {{end}}
  - name: build
    image: plugins/docker
    environment:
      GITHUB_LOGIN:
        from_secret: github_username
      GITHUB_PASSWORD:
        from_secret: github_token
      VERSION: {{ .input.backend_image.version }}
      SERVICE: circuits
    settings:
      auto_tag: false
      repo: ghcr.io/meetme2meat/drone-ci-example
      registry: ghcr.io

RabbitMQ in Kubernetes - Create User as part of Statefulset deployment kind

I am new to Kubernetes and learning by experimenting. I have created a RabbitMQ StatefulSet and it's working. However, the issue I am facing is the way I use its admin portal.
By default RabbitMQ provides the guest/guest credential, but that works only with localhost. This makes me think I am supposed to have another user for the admin portal, as well as for my connection string on the API side to access RabbitMQ. (Currently on the API side I also use guest:guest@.... as a bad practice.)
I would like to change this but I don't know how. I can manually log in to the RabbitMQ admin portal (after deployment, using the guest:guest credential) and create a new user, but I thought of automating that as part of the Kubernetes StatefulSet deployment.
I have tried to add a postStart lifecycle hook in Kubernetes, but that did not work well. I have the following items:
rabbitmq-configmap:
rabbitmq.conf: |
  ## Clustering
  #cluster_formation.peer_discovery_backend = k8s
  cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
  cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
  cluster_formation.k8s.address_type = hostname
  cluster_partition_handling = autoheal
  #cluster_formation.k8s.hostname_suffix = rabbitmq.${NAMESPACE}.svc.cluster.local
  #cluster_formation.node_cleanup.interval = 10
  #cluster_formation.node_cleanup.only_log_warning = true
rabbitmq-serviceaccount:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rabbitmq
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs:
      - get
      - list
      - watch
rabbitmq-statefulset:
initContainers:
  - name: "rabbitmq-config"
    image: busybox
    volumeMounts:
      - name: rabbitmq-config
        mountPath: /tmp/rabbitmq
      - name: rabbitmq-config-rw
        mountPath: /etc/rabbitmq
    command:
      - sh
      - -c
      # the newline is needed since the Docker image entrypoint script appends to the config file
      - cp /tmp/rabbitmq/rabbitmq.conf /etc/rabbitmq/rabbitmq.conf && echo '' >> /etc/rabbitmq/rabbitmq.conf;
        cp /tmp/rabbitmq/enabled_plugins /etc/rabbitmq/enabled_plugins;
containers:
  - name: rabbitmq
    image: rabbitmq
    ports:
      - containerPort: 15672
Any help?
There are multiple ways to do it.
You can use the RabbitMQ CLI to add the user.
Or set environment variables to change the default username/password instead of guest:
image: rabbitmq:management-alpine
environment:
  RABBITMQ_DEFAULT_USER: user
  RABBITMQ_DEFAULT_PASS: password
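The snippet above is docker-compose style; applied to the StatefulSet from the question, the same idea looks roughly like this (a sketch; the Secret named rabbitmq-admin and its keys are assumptions, not something from the original manifests):
containers:
  - name: rabbitmq
    image: rabbitmq:management-alpine
    env:
      - name: RABBITMQ_DEFAULT_USER
        valueFrom:
          secretKeyRef:
            name: rabbitmq-admin   # hypothetical Secret holding the admin credentials
            key: username
      - name: RABBITMQ_DEFAULT_PASS
        valueFrom:
          secretKeyRef:
            name: rabbitmq-admin
            key: password
Note that RABBITMQ_DEFAULT_USER/RABBITMQ_DEFAULT_PASS only take effect on a fresh node with an empty data directory.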
Passing arguments to the image:
https://www.rabbitmq.com/cli.html#passing-arguments
Or mount a configuration file into the RabbitMQ volume.
rabbitmq.conf file:
auth_mechanisms.1 = PLAIN
auth_mechanisms.2 = AMQPLAIN
loopback_users.guest = false
listeners.tcp.default = 5672
#default_pass = admin
#default_user = admin
hipe_compile = false
#management.listener.port = 15672
#management.listener.ssl = false
management.tcp.port = 15672
management.load_definitions = /etc/rabbitmq/definitions.json
#default_pass = admin
#default_user = admin
definitions.json
{
  "users": [
    {
      "name": "user",
      "password_hash": "password",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": "administrator"
    }
  ],
  "vhosts": [
    { "name": "/" }
  ],
  "queues": [
    { "name": "qwer", "vhost": "/", "durable": true, "auto_delete": false, "arguments": {} }
  ]
}
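Note that password_hash is not the plain-text password: RabbitMQ expects a salted hash matching hashing_algorithm. Assuming a reasonably recent rabbitmqctl, one can be generated with:
rabbitmqctl hash_password 'password'
and the resulting string pasted into the password_hash field of definitions.json.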
Another option
Dockerfile
FROM rabbitmq
# Define environment variables.
ENV RABBITMQ_USER user
ENV RABBITMQ_PASSWORD password
ADD init.sh /init.sh
EXPOSE 15672
# Define default command
CMD ["/init.sh"]
init.sh
#!/bin/sh
# Create Rabbitmq user
( sleep 5 ; \
rabbitmqctl add_user $RABBITMQ_USER $RABBITMQ_PASSWORD 2>/dev/null ; \
rabbitmqctl set_user_tags $RABBITMQ_USER administrator ; \
rabbitmqctl set_permissions -p / $RABBITMQ_USER ".*" ".*" ".*" ; \
echo "*** User '$RABBITMQ_USER' with password '$RABBITMQ_PASSWORD' completed. ***" ; \
echo "*** Log in the WebUI at port 15672 (example: http:/localhost:15672) ***") &
# $# is used to pass arguments to the rabbitmq-server command.
# For example if you use it like this: docker run -d rabbitmq arg1 arg2,
# it will be as you run in the container rabbitmq-server arg1 arg2
rabbitmq-server $#
You can read more here

Tekton - mount path workspace issue - Error of path

Currently, I am trying to deploy tutum-hello-world. I have written a script for the same, but it does not work as it is supposed to.
I am certain that this issue is related to workspace.
UPDATE
Here is my code for task-tutum-deploy.yaml-
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: tutum-deploy
spec:
  steps:
    - name: tutum-deploy
      image: bitnami/kubectl
      script: |
        kubectl apply -f /root/tekton-scripts/tutum-deploy.yaml
  workspaces:
    - name: messages
      optional: true
      mountPath: /root/tekton-scripts/
Error -
root@master1:~/tekton-scripts# tkn taskrun logs tutum-deploy-run-8sq8s -f -n default
[tutum-deploy] + kubectl apply -f /root/tekton-scripts/tutum-deploy.yaml
[tutum-deploy] error: the path "/root/tekton-scripts/tutum-deploy.yaml" cannot be accessed: stat /root/tekton-scripts/tutum-deploy.yaml: permission denied
container step-tutum-deploy has failed : [{"key":"StartedAt","value":"2021-06-14T12:54:01.096Z","type":"InternalTektonResult"}]
PS - I have placed my script on the master node at - /root/tekton-scripts/tutum-deploy.yaml
root@master1:~/tekton-scripts# ls -l tutum-deploy.yaml
-rwxrwxrwx 1 root root 626 Jun 11 11:31 tutum-deploy.yaml
OLD SCRIPT
Here is my code for task-tutum-deploy.yaml-
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: tutum-deploy
spec:
  workspaces:
    - name: messages
      optional: true
      mountPath: /root/tekton-scripts/tutum-deploy.yaml
  steps:
    - name: tutum-deploy
      image: bitnami/kubectl
      command: ["kubectl"]
      args:
        - "apply"
        - "-f"
        - "./tutum-deploy.yaml"
Here is my code for tutum-deploy.yaml which is present on the machine (master node) of Kubernetes cluster with read, write and execute permissions -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-tutum
  labels:
    service: hello-world-tutum
spec:
  replicas: 1
  selector:
    matchLabels:
      service: hello-world-tutum
  template:
    metadata:
      labels:
        service: hello-world-tutum
    spec:
      containers:
        - name: tutum-hello-world
          image: tutum/hello-world:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-tutum
spec:
  type: NodePort
  selector:
    service: hello-world-tutum
  ports:
    - name: "80"
      port: 80
      targetPort: 80
      nodePort: 30050
I ran the following commands from my master node of Kubernetes cluster -
1. kubectl apply -f task-tutum-deploy.yaml
2. tkn task start tutum-deploy
Error -
Using tekton command - $ tkn taskrun logs tutum-deploy-run-tvlll -f -n default
task tutum-deploy has failed: "step-tutum-deploy" exited with code 1 (image: "docker-pullable://bitnami/kubectl@sha256:b83299ee1d8657ab30fb7b7925b42a12c613e37609d2b4493b4b27b057c21d0f"); for logs run: kubectl -n default logs tutum-deploy-run-tvlll-pod-vbl5g -c step-tutum-deploy
[tutum-deploy] error: the path "./tutum-deploy.yaml" does not exist
container step-tutum-deploy has failed : [{"key":"StartedAt","value":"2021-06-11T14:01:49.786Z","type":"InternalTektonResult"}]
The error is from this part of your YAML:
spec:
  workspaces:
    - name: messages
      optional: true
      mountPath: /root/tekton-scripts/tutum-deploy.yaml
spec.workspaces.mountPath expects a directory, rather than a file, as you have specified here. You may mean /root/tekton-scripts/ instead but I am unfamiliar with tutum-hello-world.
If you look at the documentation you will see that all references to mountPath are directories rather than files.
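Following that suggestion, a minimal corrected Task could look like this (a sketch based on the question's names; mountPath is omitted so the workspace gets Tekton's default directory, and the workspace still has to be bound to a volume that actually contains tutum-deploy.yaml when the TaskRun starts):
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: tutum-deploy
spec:
  workspaces:
    - name: messages
      optional: true
  steps:
    - name: tutum-deploy
      image: bitnami/kubectl
      script: |
        kubectl apply -f $(workspaces.messages.path)/tutum-deploy.yaml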

Why is this rule preventing my GitLab stage from running?

In my .gitlab-ci.yml file I have this stage, which uses environment variables in artifacts from a previous stage:
build_dev_containers:
  stage: build_dev_containers
  variables:
    CI_DEBUG_TRACE: "true"
  script:
    - whoami
…and it outputs the following debug information:
++ DEV_CONTAINERS=true
If I change it by adding the following rule, the stage no longer runs:
rules:
  - if: '$DEV_CONTAINERS == "true"'
Any idea what I could be doing wrong?
Not sure if this information adds any value, but just in case:
My previous stage outputs a .env file in its artifacts, and it contains the value
DEV_CONTAINERS=true
Here is the complete file. The powershell script creates package.env in the root path:
image: microsoft/dotnet:latest
variables:
  GIT_RUNNER_PATH: 'C:\GitLab'
  SCRIPTS_PATH: '.\Lava-Tools\BuildAndDeploy\BuildServer'
stages:
  - dev_deploy
  - build_dev_containers
dev_deploy:
  stage: dev_deploy
  tags:
    - lava
  variables:
    GIT_CLONE_PATH: '$GIT_RUNNER_PATH/builds/d/$CI_COMMIT_SHORT_SHA/$CI_PROJECT_NAME'
  script:
    - 'powershell -noprofile -noninteractive -executionpolicy Bypass -command ${SCRIPTS_PATH}\createdevdeployvars.ps1 -Branch "${CI_COMMIT_REF_NAME}" -ShortCommitHash "${CI_COMMIT_SHORT_SHA}"'
  artifacts:
    reports:
      dotenv: package.env
build_dev_containers:
  stage: build_dev_containers
  image: docker.repo.ihsmarkit.com/octo/alpine/build/dotnet:latest
  tags:
    - lava-linux-containers
  variables:
    CI_DEBUG_TRACE: "true"
  script:
    - whoami
  rules:
    - if: '$DEV_CONTAINERS == "true"'
Rules are evaluated when the pipeline is created, before any job runs, so a rule cannot reference a variable that is only produced by an earlier job (such as one exported through a dotenv artifact).
As a workaround I used if statements in my script: section:
build_dev_containers:
  stage: build_dev_containers
  image: docker.repo.ihsmarkit.com/octo/alpine/build/dotnet:latest
  tags:
    - lava-linux-containers
  script:
    - if [ "$DEV_CONTAINERS" == "true" ]; then echo "DEV_CONTAINERS is true - running"; else echo "DEV_CONTAINERS is not true - skipping"; exit 0; fi
    - whoami
deploy_dev_containers:
  stage: deploy_dev_containers
  tags:
    - lava
  script:
    - |
      if ( "$DEV_CONTAINERS" -eq "true" ) {
        Write-Output "DEV_CONTAINERS is true - running"
      }
      else {
        Write-Output "DEV_CONTAINERS is not true - skipping"
        exit 0
      }
    - ls

kafka connect transforms RegExRouter exiting with unrecoverable exception

I have made a Kafka pipeline to copy a SQL Server table to S3.
During the sink, I'm trying to transform the topic names, dropping the prefix with the RegexRouter transform:
"transforms":"dropPrefix",
"transforms.dropPrefix.type":"org.apache.kafka.connect.transforms.RegexRouter",
"transforms.dropPrefix.regex":"SQLSERVER-TEST-(.*)",
"transforms.dropPrefix.replacement":"$1"
The sink fails with the message:
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:586)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
at io.confluent.connect.s3.S3SinkTask.put(S3SinkTask.java:188)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:564)
... 10 more
If I remove the transform, the pipeline works fine.
The problem can be reproduced with this docker-compose:
version: '2'
services:
  smtproblem-zookeeper:
    image: zookeeper
    container_name: smtproblem-zookeeper
    ports:
      - "2181:2181"
  smtproblem-kafka:
    image: confluentinc/cp-kafka:5.0.0
    container_name: smtproblem-kafka
    ports:
      - "9092:9092"
    links:
      - smtproblem-zookeeper
      - smtproblem-minio
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ZOOKEEPER_CONNECT: smtproblem-zookeeper:2181/kafka
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://smtproblem-kafka:9092
      KAFKA_CREATE_TOPICS: "_schemas:3:1:compact"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  smtproblem-schema_registry:
    image: confluentinc/cp-schema-registry:5.0.0
    container_name: smtproblem-schema-registry
    ports:
      - "8081:8081"
    links:
      - smtproblem-kafka
      - smtproblem-zookeeper
    environment:
      SCHEMA_REGISTRY_HOST_NAME: http://smtproblem-schema_registry:8081
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://smtproblem-kafka:9092
      SCHEMA_REGISTRY_GROUP_ID: schema_group
  smtproblem-kafka-connect:
    image: confluentinc/cp-kafka-connect:5.0.0
    container_name: smtproblem-kafka-connect
    command: bash -c "wget -P /usr/share/java/kafka-connect-jdbc http://central.maven.org/maven2/com/microsoft/sqlserver/mssql-jdbc/6.4.0.jre8/mssql-jdbc-6.4.0.jre8.jar && /etc/confluent/docker/run"
    ports:
      - "8083:8083"
    links:
      - smtproblem-zookeeper
      - smtproblem-kafka
      - smtproblem-schema_registry
      - smtproblem-minio
    environment:
      CONNECT_BOOTSTRAP_SERVERS: smtproblem-kafka:9092
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: "connect_group"
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 1000
      CONNECT_CONFIG_STORAGE_TOPIC: "connect_config"
      CONNECT_OFFSET_STORAGE_TOPIC: "connect_offsets"
      CONNECT_STATUS_STORAGE_TOPIC: "connect_status"
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: "io.confluent.connect.avro.AvroConverter"
      CONNECT_VALUE_CONVERTER: "io.confluent.connect.avro.AvroConverter"
      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: "http://smtproblem-schema_registry:8081"
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: "http://smtproblem-schema_registry:8081"
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_REST_ADVERTISED_HOST_NAME: "smtproblem-kafka_connect"
      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: org.reflections=ERROR
      CONNECT_PLUGIN_PATH: "/usr/share/java"
      AWS_ACCESS_KEY_ID: localKey
      AWS_SECRET_ACCESS_KEY: localSecret
  smtproblem-minio:
    image: minio/minio:edge
    container_name: smtproblem-minio
    ports:
      - "9000:9000"
    entrypoint: sh
    command: -c 'mkdir -p /data/datalake && minio server /data'
    environment:
      MINIO_ACCESS_KEY: localKey
      MINIO_SECRET_KEY: localSecret
    volumes:
      - "./minioData:/data"
  smtproblem-sqlserver:
    image: microsoft/mssql-server-linux:2017-GA
    container_name: smtproblem-sqlserver
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Azertyu&"
    ports:
      - "1433:1433"
Create a database in the sqlserver container:
$ sudo docker exec -it smtproblem-sqlserver bash
# /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Azertyu&'
Create a test database:
create database TEST
GO
use TEST
GO
CREATE TABLE TABLE_TEST (id INT, name NVARCHAR(50), quantity INT, cbMarq INT NOT NULL IDENTITY(1,1), cbModification smalldatetime DEFAULT (getdate()))
GO
INSERT INTO TABLE_TEST VALUES (1, 'banana', 150, 1); INSERT INTO TABLE_TEST VALUES (2, 'orange', 154, 2);
GO
exit
exit
Create a source connector:
curl -X PUT http://localhost:8083/connectors/sqlserver-TEST-source-bulk/config -H 'Content-Type: application/json' -H 'Accept: application/json' -d '{
"connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
"connection.password": "Azertyu&",
"validate.non.null": "false",
"tasks.max": "3",
"table.whitelist": "TABLE_TEST",
"mode": "bulk",
"topic.prefix": "SQLSERVER-TEST-",
"connection.user": "SA",
"connection.url": "jdbc:sqlserver://smtproblem-sqlserver:1433;database=TEST"
}'
Create the sink connector:
curl -X PUT http://localhost:8083/connectors/sqlserver-TEST-sink/config -H 'Content-Type: application/json' -H 'Accept: application/json' -d '{
"topics": "SQLSERVER-TEST-TABLE_TEST",
"topics.dir": "TABLE_TEST",
"s3.part.size": 5242880,
"storage.class": "io.confluent.connect.s3.storage.S3Storage",
"tasks.max": 3,
"schema.compatibility": "NONE",
"s3.region": "us-east-1",
"schema.generator.class": "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
"connector.class": "io.confluent.connect.s3.S3SinkConnector",
"partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
"format.class": "io.confluent.connect.s3.format.avro.AvroFormat",
"s3.bucket.name": "datalake",
"store.url": "http://smtproblem-minio:9000",
"flush.size": 1,
"transforms":"dropPrefix",
"transforms.dropPrefix.type":"org.apache.kafka.connect.transforms.RegexRouter",
"transforms.dropPrefix.regex":"SQLSERVER-TEST-(.*)",
"transforms.dropPrefix.replacement":"$1"
}'
The error can be seen in the Kafka Connect UI, or with a curl status command:
curl -X GET http://localhost:8083/connectors/sqlserver-TEST-sink/status
Thanks for your help
So, if we debug, we can see what it is trying to do...
There is a HashMap keyed by the original topic name (SQLSERVER-TEST-TABLE_TEST-0), but the transform has already been applied to the record (TABLE_TEST-0), so when it looks up the "new" topic name it cannot find the S3 writer for that TopicPartition.
Therefore, the map returns null, and the subsequent .buffer(record) throws an NPE.
I had a similar use case for this before -- writing more than one topic into a single S3 path, and I ended up having to write a custom partitioner, e.g. class MyPartitioner extends DefaultPartitioner.
If you build a JAR using some custom code like that, put it under usr/share/java/kafka-connect-storage-common, then edit the connector config for partitioner.class, it should work as expected.
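For example (the JAR and class names here are just placeholders for whatever custom code you build), the JAR can be copied into the running Connect container and the sink config pointed at the new class:
docker cp my-partitioner.jar smtproblem-kafka-connect:/usr/share/java/kafka-connect-storage-common/
and in the sink connector config:
"partitioner.class": "com.example.MyPartitioner"
The Connect worker then needs a restart so the new JAR is on the classpath before the connector is re-created.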
I'm not really sure if this is a "bug", per se, because further up the call stack there is no way to get a reference to the regex transform at the time the topicPartitionWriters are declared with the source topic name(s).
If anything, the storage connector configurations should allow a separate regex transform that can edit the encodedPartition (the path where it writes the files).