I am trying to install rabbitmq:8.6.1 from the Bitnami chart repository using terraform:0.12.18.
My Helm version is 3.4.2.
While installing I am getting the following error:
Error: validation: chart.metadata is required
My Terraform file is as below:
resource "kubernetes_secret" "rabbitmq_load_definition" {
  metadata {
    name      = "rabbitmq-load-definition"
    namespace = kubernetes_namespace.kylas_sales.metadata[0].name
  }
  type = "Opaque"
  data = {
    "load_definition.json" = jsonencode({
      "users": [
        {
          name: "sales",
          tags: "administrator",
          password: var.rabbitmq_password
        }
      ],
      "vhosts": [
        {
          name: "/"
        }
      ],
      "permissions": [
        {
          user: "sales",
          vhost: "/",
          configure: ".*",
          write: ".*",
          read: ".*"
        }
      ],
      "exchanges": [
        {
          name: "ex.iam",
          vhost: "/",
          type: "topic",
          durable: true,
          auto_delete: false,
          internal: false,
          arguments: {}
        }
      ]
    })
  }
}
resource "helm_release" "rabbitmq" {
  chart      = "rabbitmq"
  name       = "rabbitmq"
  version    = "8.6.1"
  timeout    = 600
  repository = "https://charts.bitnami.com/bitnami"
  namespace  = "sales"
  depends_on = [
    kubernetes_secret.rabbitmq_load_definition
  ]
}
After looking at issue #509 on terraform-provider-helm: this error occurs if your module/subdirectory name is the same as the chart name (in my case the directory name is rabbitmq and my helm_release name is also rabbitmq). I am still not able to identify why.
Solution: with reference to that issue, I changed my directory name from rabbitmq to rabbitmq-resource and the error is gone.
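To make the workaround concrete, here is a rough sketch of the layout change (the module path is illustrative, not taken from my actual repository):
# Before: the Terraform module directory shares the chart's name,
# so Helm appears to resolve "rabbitmq" as a local chart and fails with "chart.metadata is required"
modules/rabbitmq/main.tf            <- contains the helm_release "rabbitmq" above
# After: directory renamed, so only the Bitnami repository chart is resolved
modules/rabbitmq-resource/main.tf   <- same helm_release "rabbitmq"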
I'm using a Kinesis Firehose delivery stream to send events from EventBridge to an S3 bucket, but I can't seem to find which class has the option to configure dynamic partitioning.
This is my code for the delivery stream:
new CfnDeliveryStream(this, `Export-delivery-stream`, {
  s3DestinationConfiguration: {
    bucketArn: bucket.bucketArn,
    roleArn: kinesisFirehoseRole.roleArn,
    prefix: `test/!{timestamp:yyyy/MM/dd}/`
  }
});
I have been working on the same issue for a few days and have finally gotten something to work. Here is an example of how it can be implemented in CDK. In short, dynamic partitioning has to be enabled as you have done, but you also need to set the partition key and the .jq expression in the so-called processingConfiguration.
Our incoming JSON data looks something like this:
{
  "data": {
    "timestamp": 1633521266990,
    "defaultTopic": "Topic",
    "data": {
      "OUT1": "Inactive",
      "Current_mA": 3.92
    }
  }
}
The CDK code looks as follows:
const DeliveryStream = new CfnDeliveryStream(this, 'deliverystream', {
  deliveryStreamName: 'deliverystream',
  extendedS3DestinationConfiguration: {
    cloudWatchLoggingOptions: {
      enabled: true,
    },
    bucketArn: Bucket.bucketArn,
    roleArn: deliveryStreamRole.roleArn,
    prefix: 'defaultTopic=!{partitionKeyFromQuery:defaultTopic}/!{timestamp:yyyy/MM/dd}/',
    errorOutputPrefix: 'error/!{firehose:error-output-type}/',
    bufferingHints: {
      intervalInSeconds: 60,
    },
    dynamicPartitioningConfiguration: {
      enabled: true,
    },
    processingConfiguration: {
      enabled: true,
      processors: [
        {
          type: 'MetadataExtraction',
          parameters: [
            {
              parameterName: 'MetadataExtractionQuery',
              parameterValue: '{defaultTopic: .data.defaultTopic}',
            },
            {
              parameterName: 'JsonParsingEngine',
              parameterValue: 'JQ-1.6',
            },
          ],
        },
        {
          type: 'AppendDelimiterToRecord',
          parameters: [
            {
              parameterName: 'Delimiter',
              parameterValue: '\\n',
            },
          ],
        },
      ],
    },
  },
})
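With the sample record above (defaultTopic is "Topic", and the embedded timestamp 1633521266990 corresponds to 2021-10-06), objects should land under keys roughly like the following. Note the date part actually comes from Firehose's arrival timestamp, so this assumes the record is delivered on the same day; the object name itself is generated by Firehose and only sketched here:
defaultTopic=Topic/2021/10/06/<object written by Firehose>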
I am trying to use Azure Resource Manager and Bicep to deploy an IoT Hub and a storage account. IoT Hub has a feature to store all messages in a storage account for archiving purposes. The IoT Hub should access the storage account with a user-assigned managed identity.
I would like to deploy all of this in a single ARM deployment written in Bicep. The problem is deploying the IoT Hub with a user-assigned identity and setting up the archive custom route. I get the error:
{
  "code": "DeploymentFailed",
  "message": "At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.",
  "details": [
    {
      "code": "400140",
      "message": "endpointName:messageArchive, exceptionMessage:Invalid operation: Managed identity is not enabled for IotHub ... errorcode: IH400140."
    }
  ]
}
My Bicep file looks like this:
resource messageArchive 'Microsoft.Storage/storageAccounts@2021-04-01' = {
  name: 'messagearchive4631'
  location: resourceGroup().location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_GRS'
  }
  properties: {
    accessTier: 'Hot'
    supportsHttpsTrafficOnly: true
  }
}
resource messageArchiveBlobService 'Microsoft.Storage/storageAccounts/blobServices@2021-04-01' = {
  name: 'default'
  parent: messageArchive
  resource messageArchiveContainer 'containers@2021-02-01' = {
    name: 'iot-test-4631-container'
    properties: {
      publicAccess: 'None'
    }
  }
}
resource iotIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2018-11-30' = {
  name: 'iot-test-access-archive-4631'
  location: resourceGroup().location
}
resource iotAccesToStorage 'Microsoft.Authorization/roleAssignments@2020-08-01-preview' = {
  name: guid(extensionResourceId(messageArchive.id, messageArchive.type, 'iot-test-access-archive-4631'))
  scope: messageArchive
  properties: {
    roleDefinitionId: '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/ba92f5b4-2d11-453d-a403-e96b0029c9fe'
    principalId: iotIdentity.properties.principalId
    description: 'Allow access for IoT Hub'
  }
}
resource iothub 'Microsoft.Devices/IotHubs@2021-03-31' = {
  name: 'iot-test-4631'
  location: resourceGroup().location
  sku: {
    name: 'B1'
    capacity: 1
  }
  identity: {
    type: 'UserAssigned'
    userAssignedIdentities: {
      '${iotIdentity.id}': {}
    }
  }
  dependsOn: [
    iotAccesToStorage
  ]
  properties: {
    features: 'None'
    eventHubEndpoints: {
      events: {
        retentionTimeInDays: 1
        partitionCount: 4
      }
    }
    routing: {
      endpoints: {
        storageContainers: [
          {
            name: 'messageArchive'
            endpointUri: 'https://messagearchive4631.blob.core.windows.net/'
            containerName: 'iot-test-4631-container'
            batchFrequencyInSeconds: 100
            maxChunkSizeInBytes: 104857600
            encoding: 'Avro'
            fileNameFormat: '{iothub}/{YYYY}/{MM}/{DD}/{HH}/{mm}_{partition}.avro'
            authenticationType: 'identityBased'
          }
        ]
      }
      routes: [
        {
          name: 'EventHub'
          source: 'DeviceMessages'
          endpointNames: [
            'events'
          ]
          isEnabled: true
        }
        {
          name: 'messageArchiveRoute'
          source: 'DeviceMessages'
          endpointNames: [
            'messageArchive'
          ]
          isEnabled: true
        }
      ]
      fallbackRoute: {
        source: 'DeviceMessages'
        endpointNames: [
          'events'
        ]
        isEnabled: true
      }
    }
  }
}
I tried removing the message routing block in IoT Hub
endpoints: {
  storageContainers: [
    {
      name: 'messageArchive'
      endpointUri: 'https://messagearchive4631.blob.core.windows.net/'
      containerName: 'iot-test-4631-container'
      batchFrequencyInSeconds: 100
      maxChunkSizeInBytes: 104857600
      encoding: 'Avro'
      fileNameFormat: '{iothub}/{YYYY}/{MM}/{DD}/{HH}/{mm}_{partition}.avro'
      authenticationType: 'identityBased'
    }
  ]
}
and deploying it once. That deployment works. If I then add the message routing block back and deploy again, it works as expected.
Is it possible to do this in a single deployment?
I figured it out by myself. I'm using a user-assigned managed identity and therefore I was missing this in the IoT Hub storage container endpoint configuration:
authenticationType: 'identityBased'
identity: {
  userAssignedIdentity: iotIdentity.id
}
The complete IoT Hub endpoint configuration looks like this:
endpoints: {
  storageContainers: [
    {
      name: 'RawDataStore'
      endpointUri: 'https://${nameRawDataStore}.blob.${environment().suffixes.storage}/'
      containerName: nameIotHub
      batchFrequencyInSeconds: 100
      maxChunkSizeInBytes: 104857600
      encoding: 'Avro'
      fileNameFormat: '{iothub}/{YYYY}/{MM}/{DD}/{HH}/{mm}_{partition}.avro'
      authenticationType: 'identityBased'
      identity: {
        userAssignedIdentity: iotIdentity.id
      }
    }
  ]
}
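Note that identity.userAssignedIdentity has to reference an identity that is also attached to the hub itself; the corresponding block from the hub definition in the question is repeated here for clarity:
identity: {
  type: 'UserAssigned'
  userAssignedIdentities: {
    '${iotIdentity.id}': {}
  }
}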
Cron used to work in strapi-3.0.0-beta.20,
but it doesn't work after migrating to strapi-3.0.0.
Strapi-3.0.0-beta.20
./config/environments/{env}/server.json
{
  "host": "0.0.0.0",
  "port": 1337,
  "proxy": {
    "enabled": false
  },
  "cron": {
    "enabled": true
  },
  "admin": {
    "autoOpen": false
  }
}
Strapi-3.0.0
./config/server.js
module.exports = ({ env }) => ({
  host: '0.0.0.0',
  port: env.int('PORT', '1337'),
  production: env('NODE_ENV') === 'production' ? true : false,
  proxy: {
    enabled: false
  },
  cron: {
    enabled: true
  },
  admin: {
    autoOpen: false
  }
})
This is the Strapi code that uses server.js:
strapi/packages/strapi/lib/middlewares/cron/index.js
if (strapi.config.get('server.cron.enabled', false) === true) {
  _.forEach(_.keys(strapi.config.get('functions.cron', {})), task => {
    cron.scheduleJob(task, strapi.config.functions.cron[task]);
  });
}
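For context, the tasks that this loop schedules come from ./config/functions/cron.js; a minimal sketch of such a file (the task itself is illustrative):
// ./config/functions/cron.js
module.exports = {
  // the key is a cron expression, the value is the task to run
  '0 * * * *': () => {
    strapi.log.info('hourly task executed');
  },
};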
This is the content I filed in the GitHub issue:
Describe the bug
Incorrect information in documentation for new configuration loader
Expected behavior
The documentation may be misleading regarding the cron setting.
This is the setting that activates cron in 3.0.0-beta.20:
./config/environments/{env}/server.json
{
  "host": "0.0.0.0",
  "port": 1337,
  "cron": {
    "enabled": true
  }
}
The migration documentation provides guidance like this:
Migrating
Server
Your server configuration can move from ./config/environments/{env}/server.json to ./config/server.js as shown here.
Server
Available options -> cron
However, to enable cron in version 3.0.0 you must do it in middleware.js:
./config/middleware.js
timeout: 100,
load: {
  before: ['responseTime', 'logger', 'cors', 'responses', 'gzip'],
  order: [
    "Define the middlewares' load order by putting their name in this array is the right order"
  ],
  after: ['parser', 'router']
},
settings: {
  ...
  cron: { enabled: true }
  ...
}
Code snippets
After checking the code (strapi/middlewares/index.js), I learned that it should be set in the middleware configuration.
System
- Node.js version: v12.14.0
- NPM version: 6.13.6
- Strapi version: 3.0.0
- Database: mongodb
- Operating system: Windows, Linux
I'm using the good plugin for my app, and I have copied the config parameters straight from the sample on the plugin's page. The console and file reporters are working, but good-http is not!
Here is my configuration:
myHTTPReporter: [
  {
    module: 'good-squeeze',
    name: 'Squeeze',
    args: [{ error: '*', log: 'error', request: 'error' }]
  },
  {
    module: 'good-http',
    args: [
      'http://localhost/log-request/index.php?test=23424', // <- test file I created to see if data is being sent or not
      {
        wreck: {
          headers: { 'x-api-key': 12345 }
        }
      }
    ]
  }
]
The only argument that is actually working is ops: *; none of the error: *, ... arguments are working.
Am I missing something in my config parameters?
Maybe you should change the threshold parameter of the plugin. It is set to 20 by default, so it only sends data after 20 events. If you want immediate results you have to change it to 0.
myHTTPReporter: [
  {
    module: 'good-squeeze',
    name: 'Squeeze',
    args: [{ error: '*', log: 'error', request: 'error' }]
  },
  {
    module: 'good-http',
    args: [
      'http://localhost/log-request/index.php?test=23424', // <- test file I created to see if data is being sent or not
      {
        threshold: 0,
        wreck: {
          headers: { 'x-api-key': 12345 }
        }
      }
    ]
  }
]
I use this stack:
On each front server:
rails
logstasher gem (formats Rails logs as JSON)
logstash-forwarder (just forwards logs to logstash on the central server)
On the log server:
logstash (to centralize and index logs)
kibana (to display them)
Kibana works well with the JSON format, but the "message" data arrives as a string rather than as parsed JSON (cf. the snippet below). Is there a way to fix this?
For example, accessing the status field is a bit tricky.
Here's a message sample:
{
  _index: logstash-2014.09.18
  _type: rails
  _id: RHJgU2L_SoOKS79pBzU_mA
  _version: 1
  _score: null
  _source: {
    message: "{"@source":"unknown","@tags":["request"],"@fields":{"method":"GET","path":"/foo/bar","format":"html","controller":"items","action":"show","status":200,"duration":377.52,"view":355.67,"db":7.47,"ip":"123.456.789.123","route":"items#show","request_id":"021ad750600ab99758062de60102da8f"},"@timestamp":"2014-09-18T09:07:31.822782+00:00"}"
    @version: 1
    @timestamp: 2014-09-18T09:08:21.990Z
    type: rails
    file: /home/user/path/logstash_production.log
    host: webserver.example.com
    offset: 23200721
    format: json_event
  }
  sort: [
    rails
  ]
}
Thank you for your help ;).
EDIT 1: Added logstash configuration files:
/etc/logstash/conf.d/01-lumberjack-input.conf
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    codec => "json"
  }
}
/etc/logstash/conf.d/10-syslog.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
/etc/logstash/conf.d/30-lumberjack-output.conf
output {
  elasticsearch { host => localhost }
  # stdout { codec => rubydebug }
}
If useful, here is the logstash-forwarder configuration:
/etc/logstash-forwarder on the web servers
{
  "network": {
    "servers": [ "123.465.789.123:5000" ],
    "timeout": 45,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure"
      ],
      "fields": { "type": "syslog" }
    },
    {
      "paths": [
        "/home/xnxx/gportal/shared/log/logstash_production.log"
      ],
      "fields": { "type": "rails", "format": "json_event" }
    }
  ]
}
My config files are mainly inspired by this tutorial: https://www.digitalocean.com/community/tutorials/how-to-use-logstash-and-kibana-to-centralize-and-visualize-logs-on-ubuntu-14-04
I've never personally used the lumberjack input, but it looks like it should support codec => json, so I'm not sure why it isn't working. As a workaround, you could try adding this filter instead (in /etc/logstash/conf.d/01-lumberjack-input.conf):
filter {
  json {
    source => 'message'
    remove_field => [ 'message' ]
  }
}
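Assuming this filter is applied to the sample event above, the Rails data should end up as real fields on the event instead of a single string, roughly like this (abbreviated, illustrative output rather than an actual capture):
{
  "@source" => "unknown",
  "@tags" => ["request"],
  "@fields" => {
    "method" => "GET",
    "path" => "/foo/bar",
    "status" => 200,
    "duration" => 377.52
  },
  "type" => "rails",
  "host" => "webserver.example.com"
}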
What finally worked was to stop and then start logstash; with a plain restart the configuration did not seem to be reloaded.
So instead of:
sudo service logstash restart
I did:
sudo service logstash stop
then waited for about a minute, and then:
sudo service logstash start
I don't really understand the reason (the init script does the same thing, just without waiting a minute), but it worked for me.