Engine_version: Redis versions must match <major>.x when using version 6 or higher, or <major>.<minor>.<bug-fix>

I have the following elasticache resource:
resource "aws_elasticache_subnet_group" "main" {
name = "${var.identifier}-sng"
subnet_ids = var.subnet_ids
}
resource "aws_elasticache_cluster" "main" {
cluster_id = var.identifier
engine = "redis"
node_type = var.node_type
num_cache_nodes = var.nodes_count
parameter_group_name = var.parameter_group_name
engine_version = var.engine_version
port = 6379
security_group_ids = var.security_group_ids
subnet_group_name = aws_elasticache_subnet_group.main.name
tags = {
"redis" = "Auto managed by TF"
}
}
I run AWS ElastiCache Redis 6.0.5 and my var.engine_version is set to 6.0.5 as well. It worked quite well until I upgraded from Terraform 1.3 to 1.4, after which I received the following error:
engine_version: Redis versions must match <major>.x when using version 6 or higher,
or <major>.<minor>.<bug-fix>
Is anyone else experiencing this issue after upgrading? What would be a workaround for this problem?

I just ran into this problem and was able to fix it by setting the parameter_group_name family to 6.x and engine_version to 6.0. When I set the engine version to 6.0.5, it threw the error you listed above. The 6.0 engine version defaults to 6.0.5. A sketch of the working configuration is below.
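A minimal sketch of the fix (assuming the AWS-managed default Redis 6 parameter group; only the relevant arguments are shown):

resource "aws_elasticache_cluster" "main" {
  cluster_id      = var.identifier
  engine          = "redis"
  node_type       = var.node_type
  num_cache_nodes = var.nodes_count

  # Parameter group from the 6.x family (default.redis6.x is the AWS-managed one)
  parameter_group_name = "default.redis6.x"

  # Major version only; AWS resolves it to the latest 6.0 patch (6.0.5 at the time)
  engine_version = "6.0"
}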

I was using ElastiCache Redis 6.2.6 and 7.0.4 in two different projects.
To make them work I had to set engine_version to 6.2 and 7.0 respectively.

Related

How do I upgrade the EKS default nodegroup version using CDK?

Please note this question is different from this one. The custom EKS node group is defined by me and exposes configuration I can modify. This question is specifically about the default node group, which exposes no props or configuration for me to modify.
I would like to upgrade the default node group that was created by CDK when I provisioned the EKS cluster.
this.cluster = new eks.Cluster(this, 'eks-cluster', {
  vpc: props.vpc,
  clusterName: props.clusterName,
  version: eks.KubernetesVersion.V1_22,
  albController: {
    version: eks.AlbControllerVersion.V2_4_1,
  },
  defaultCapacity: 5,
});
However, I do not see any option to modify the version of the default node group. I have already bumped the cluster version to v1.22, as well as my custom node groups, but the default node group still uses v1.21.
How can I upgrade the default node group version using CDK?
From my experiments and observations, there is nothing special about the default node group that EKS creates (this.cluster.defaultNodegroup). It is just a typical EKS node group with auto-scaling and no taints. So I created my own default node group with an updated AMI release version: I set defaultCapacity to 0 to disable the built-in group, then created my own node group with the same configuration as the one EKS creates by default.
this.cluster = new eks.Cluster(this, 'eks-cluster', {
  vpc: props.vpc,
  clusterName: props.clusterName,
  version: eks.KubernetesVersion.V1_23,
  kubectlLayer: new KubectlV23Layer(this, 'kubectl'),
  albController: {
    version: eks.AlbControllerVersion.V2_4_1,
  },
  // set this to 0 to disable the default node group created by EKS
  defaultCapacity: 0,
});

const defaultNodeGroup = new eks.Nodegroup(this, 'default-node-group', {
  cluster: this.cluster,
  releaseVersion: '<updated AMI release version>',
  nodegroupName: 'eks-default-nodegroup',
});
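If you also want the replacement to mirror the built-in group's sizing, you can pin it explicitly. A minimal sketch, assuming the CDK default of m5.large instances and the defaultCapacity of 5 used earlier (the import is only needed if ec2 isn't already imported):

import * as ec2 from 'aws-cdk-lib/aws-ec2';

const defaultNodeGroup = new eks.Nodegroup(this, 'default-node-group', {
  cluster: this.cluster,
  nodegroupName: 'eks-default-nodegroup',
  // Assumption: mirror what the built-in default node group provisions
  instanceTypes: [ec2.InstanceType.of(ec2.InstanceClass.M5, ec2.InstanceSize.LARGE)],
  minSize: 5,
  desiredSize: 5,
  maxSize: 5,
  // Pin the node AMI to match the upgraded cluster version
  releaseVersion: '<updated AMI release version>',
});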

Can't get the subnet ID in Azure using Python

I want to get the resource ID of a subnet in a virtual network in Azure using Python. The call I have used is this line: subnets = network_client.subnets.get(resource_group, 'XXX', 'XXX')
But what I get is an error: HttpResponseError: (InvalidApiVersionParameter) The api-version '2021-02-01' is invalid. The supported versions are '2021-04-01,2021-01-01,2020-10-01,2020-09-01,2020-08-01,2020-07-01,2020-06-01,2020-05-01,2020-01-01,2019-11-01,2019-10-01,2019-09-01,2019-08-01,2019-07-01,2019-06-01,2019-05-10,2019-05-01,2019-03-01,2018-11-01,2018-09-01,2018-08-01,2018-07-01,2018-06-01,2018-05-01,2018-02-01,2018-01-01,2017-12-01,2017-08-01,2017-06-01,2017-05-10,2017-05-01,2017-03-01,2016-09-01,2016-07-01,2016-06-01,2016-02-01,2015-11-01,2015-01-01,2014-04-01-preview,2014-04-01,2014-01-01,2013-03-01,2014-02-26,2014-04'.
I have tried different API versions, but they all give me errors. Any ideas?
The version of azure-mgmt-network I am using is 19.0.0.
Please make sure that you have the two modules below installed before executing the script:
pip install azure-mgmt-network
pip install azure-identity
Then use the below script to get the subnet-id of specific subnet present in your subscription:
from azure.identity import AzureCliCredential
from azure.mgmt.network import NetworkManagementClient

credential = AzureCliCredential()
subscription_id = "948d4068-xxxx-xxxx-xxxx-e00a844e059b"
network_client = NetworkManagementClient(credential, subscription_id)

resource_group_name = "ansumantest"
location = "West US 2"
virtual_network_name = "ansuman-vnet"
subnet_name = "acisubnet"

subnet = network_client.subnets.get(resource_group_name, virtual_network_name, subnet_name)
print(subnet.id)
Output: the script prints the full resource ID of the subnet.
Note: I am using pip 21.2.4 and Python 3.9, and the same azure-mgmt-network version as you. If you are still facing the issue, try installing the newer release, i.e. 19.1.0.
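If upgrading the package alone doesn't help, you can also try pinning the client to an API version that both the SDK ships and your subscription lists as supported. A minimal sketch, assuming your azure-mgmt-network release accepts the api_version keyword and ships a 2020-06-01 profile:

from azure.identity import AzureCliCredential
from azure.mgmt.network import NetworkManagementClient

credential = AzureCliCredential()
subscription_id = "948d4068-xxxx-xxxx-xxxx-e00a844e059b"

# Pin an API version that appears in the "supported versions" list from the error;
# 2020-06-01 is also a profile shipped with recent azure-mgmt-network releases.
network_client = NetworkManagementClient(credential, subscription_id, api_version="2020-06-01")

subnet = network_client.subnets.get("ansumantest", "ansuman-vnet", "acisubnet")
print(subnet.id)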

Parsoid: Unexpected Token error and failing to initialize

mwApis:
- # This is the only required parameter,
  # the URL of your MediaWiki API endpoint.
  uri: 'http://spgenerations.com/wiki/api.php'
On my linux box, I can curl this URL and receive the api data.
Regardless of whether I use the apt-get installation or the developer installation (npm install), both instances give me this error:
{"name":"parsoid","hostname":"play.projecttidal.com.KVM","pid":12636,"level":30,"levelPath":"info/service-runner","msg":"master(12636) initializing 2 workers","time":"2019-03-12T03:55:47.504Z","v":0}
{"name":"parsoid","hostname":"play.projecttidal.com.KVM","pid":12645,"level":60,"moduleName":"lib/index.js","levelPath":"fatal/service-runner/worker","msg":"Unexpected token ...","time":"2019-03-12T03:55:47.917Z","v":0}
{"name":"parsoid","hostname":"play.projecttidal.com.KVM","pid":12636,"level":40,"message":"first worker died during startup, continue startup","worker_pid":12645,"exit_code":1,"startup_attempt":1,"levelPath":"warn/service-runner/master","msg":"first worker died during startup, continue startup","time":"2019-03-12T03:55:48.925Z","v":0}
For context, the hostname here is incorrect and the domain has been removed.
This is my parsoid config:
// Parsoid configuration
$wgVirtualRestConfig['modules']['parsoid'] = array(
    'url' => 'server.spgenerations.com',
    'forwardCookies' => true
);
I have tried everything under the hidden voodoo sun to get this thing to work and I'm beyond frustrated: four hours spent tinkering with URLs to no avail. So please, if you know anything relating to this error, lend a hand.
Check which Node.js version you are running:
nodejs --version
If it is 4.x: that's too old for Parsoid. I had the same situation (Debian 9 still ships such an old Node.js version in its repositories). After upgrading to 10.x it ran fine for me.
I used the following guide (see "Install using a PPA") to update to a newer Node.js release: https://www.digitalocean.com/community/tutorials/how-to-install-node-js-on-debian-9
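For reference, on Debian 9 the PPA route from that guide boiled down to roughly these commands (the 10.x setup script was current at the time; pick whichever release Parsoid supports now):

curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
sudo apt-get install -y nodejs
nodejs --version   # should now report v10.x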

How to configure applications in AMI 4

The documentation says
In Amazon EMR releases 4.0 and greater, the only accepted parameter is
the application name. To pass arguments to applications, you supply a
configuration for each application.
But I cannot find an example that shows how to pass arguments in AMI 4. All I can find are examples configuring exports, such as the one below. I am trying to figure out how to set the version of Spark to use.
[
  {
    "Classification": "hadoop-env",
    "Properties": {},
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "HADOOP_USER_CLASSPATH_FIRST": "true",
          "HADOOP_CLASSPATH": "/path/to/my.jar"
        }
      }
    ]
  }
]
You cannot set an arbitrary version of Spark like you could with 3.x AMI versions. Rather, the version of Spark (and of the other apps, of course) is determined by the release label. For example, the latest release is currently emr-5.2.1, which includes Spark 2.0.2. If you want a 1.x version of Spark, the latest available is Spark 1.6.3 on release emr-4.8.3.
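For example, with the CLI you select Spark's version indirectly by choosing the release label. A sketch, with placeholder sizing (emr-4.8.3 is what maps to Spark 1.6.3 above):

aws emr create-cluster \
  --name "spark-1.6-cluster" \
  --release-label emr-4.8.3 \
  --applications Name=Spark \
  --instance-type m3.xlarge \
  --instance-count 3 \
  --use-default-roles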

Getting com.sap.spark.vora.VoraConfigurationException with "discovery" parameter

I've got an HDP 2.3.4 cluster on SLES 11 SP3 with 3 machines, and I installed Vora 1.2.
I finally got the Discovery service to work; I can verify it at http://myclustermachine:8500/ui/#/dc1/services. Also, the Vora Thriftserver doesn't die anymore.
So I can get through the line "val vc = new SapSQLContext(sc)" on page 34 of the Vora Installation Guide. But when I try to create a table, I get the following:
com.sap.spark.vora.VoraConfigurationException: Following parameter(s) are invalid: discovery
    at com.sap.spark.vora.config.ParametersValidator$.checkSyntax(ParametersValidator.scala:280)
    at com.sap.spark.vora.config.ParametersValidator$.apply(ParametersValidator.scala:98)
    at com.sap.spark.vora.DefaultSource.createRelation(DefaultSource.scala:108)
    at org.apache.spark.sql.execution.datasources.CreateTableUsingTemporaryAwareCommand.resolveDataSource(CreateTableUsingTemporaryAwareCommand.scala:59)
    at org.apache.spark.sql.execution.datasources.CreateTableUsingTemporaryAwareCommand.run(CreateTableUsingTemporaryAwareCommand.scala:29)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
    at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:69)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:140)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:138)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:138)
    at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:933)
    at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:933)
    at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:144)
    at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:129)
    at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
What may be wrong this time?
Apparently it was a line in spark-defaults.conf that I had added for the discovery parameter: "spark.vora.discovery xxxxxxx:8500"
After I removed it, the whole thing worked.