Is it possible to get a list of available instance types in a specific availability zone for AWS RDS?

I'm trying to modify an RDS DB instance launched in a VPC via the AWS API, using the ModifyDBInstance action. I'm not changing the instance type (the instance was launched with the db.m1.small type and that hasn't changed), but I'm receiving this message:
AWS Error. Request ModifyDBInstance failed. Cannot modify the instance class because there are no instances of the requested class available in the current instance's availability zone. Please try your request again at a later time. (RequestID: xxx).
According to the AWS docs:
To determine the instance classes that are available for a particular DB engine, use the DescribeOrderableDBInstanceOptions action. Note that not all instance classes are available in all regions for all DB engines.
So I have two questions:
Is it possible to get, via the API, only the instance types available in a specific AZ? The DescribeOrderableDBInstanceOptions response contains many instance types that are not actually available. I also checked the response of the DescribeReservedDBInstancesOfferings action, and it doesn't fit either.
Why is it possible to launch a DB instance with a given instance type, yet run into trouble when modifying that instance without changing the instance type?
Any ideas?

It looks like one of the return values listed for the describe-orderable-db-instance-options AWS RDS CLI call is AvailabilityZones:
AvailabilityZones -> (list)
    A list of Availability Zones for the orderable DB instance.
    (structure)
        Contains Availability Zone information.
        This data type is used as an element in the following data type: OrderableDBInstanceOption
        Name -> (string)
            The name of the availability zone.
Generally the CLI allows you to filter, but for some reason or another filtering is not supported for RDS:
--filters (list)
    This parameter is not currently supported.
The API returns the OrderableDBInstanceOption object, which also has the Availability Zones listed.
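For example, here is a rough boto3 sketch along those lines (the region, engine and AZ name are placeholders, and the AZ filtering is done client-side since the API itself has no AZ filter):
import boto3

rds = boto3.client('rds', region_name='eu-west-1')   # placeholder region

target_az = 'eu-west-1a'                              # placeholder AZ name
available_classes = set()

# Page through all orderable options for the given engine (VPC offerings only)
kwargs = {'Engine': 'mysql', 'Vpc': True}
while True:
    page = rds.describe_orderable_db_instance_options(**kwargs)
    for option in page['OrderableDBInstanceOptions']:
        az_names = [az['Name'] for az in option['AvailabilityZones']]
        if target_az in az_names:
            available_classes.add(option['DBInstanceClass'])
    if 'Marker' in page:
        kwargs['Marker'] = page['Marker']
    else:
        break

print(sorted(available_classes))
Note that this only tells you what is orderable in that AZ, not whether there is spare capacity at this exact moment, which is what the Modify error is really about.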
To answer #2: AWS does have capacity issues from time to time, like any other cloud or service provider, though they are generally better at handling them than most. Which AZ are you trying to use, and what instance size? If you continue to have issues, I would open a support ticket with AWS.

The easiest way is to select any one of the RDS instances in your infrastructure and click Modify; there is a DB instance class drop-down where you can see the instance types available in that particular region.

Implementing a RMW operation in Redis

I would like to maintain comma-separated lists of entries of the form <ip>:<app>, indexed by an account ID. There would be one such list for each user, indexed by their account ID, with the number of users in the millions. This is mainly to track which server in a cluster a user of a certain application is connected to.
Since all servers are written in Java, with Redisson I'm currently doing:
RSet<String> set = client.getSet(accountKey);
and then I can modify the set using some typical Java container APIs supported by Redisson. I basically need three types of updates to these comma-separated lists:
Client connects to a new application = append
Client reconnects with existing application to new endpoint = modify
Client disconnects = remove
A new connection would require a change to a field like:
1.1.1.1:foo,2.2.2.2:bar -> 1.1.1.1:foo,2.2.2.2:bar,3.3.3.3:baz
A reconnect would require an update like:
1.1.1.1:foo,2.2.2.2:bar -> 3.3.3.3:foo,2.2.2.2:bar
A disconnect would require an update like:
1.1.1.1:foo,2.2.2.2:bar -> 2.2.2.2:bar
As mentioned, the fields would be keyed by the account ID of the user.
My question is the following: without using Redisson, how can I implement this "directly" on top of Redis commands? The goal is to allow rewriting certain components in a language other than Java. The cluster handles close to a million requests per second.
I'm actually quite curious how Redisson implements an RSet under the hood, and I haven't had time to dig into it. I guess one option would be to use Lua, but I've never used it with Redis. Any ideas how to efficiently implement these operations on top of Redis in a manner that is easily supported by multiple languages, i.e. not relying on a specific library?
Having actually thought about the problem properly, it can be solved directly with a Redis hash (HSET), where <app> is the field name, the value is the IP, and the key is the user's account ID.
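A minimal sketch of what that looks like with plain Redis commands, shown here with redis-py (HSET/HDEL/HGETALL are available in every Redis client; the connection settings and key naming scheme are just placeholders):
import redis

r = redis.Redis(host='localhost', port=6379)   # placeholder connection settings

account_key = 'account:12345'                  # hypothetical key scheme: one hash per account

# Client connects to a new application -> add a field ("append")
r.hset(account_key, 'foo', '1.1.1.1')
r.hset(account_key, 'bar', '2.2.2.2')

# Client reconnects with an existing application to a new endpoint -> overwrite the field ("modify")
r.hset(account_key, 'foo', '3.3.3.3')

# Client disconnects -> drop the field ("remove")
r.hdel(account_key, 'foo')

# Read back the whole mapping for the account: {b'bar': b'2.2.2.2'}
print(r.hgetall(account_key))
Each of the three updates is a single hash command executed atomically by Redis, so there is no read-modify-write cycle and no need for Lua to keep it consistent.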

Nifi API - Update parameter context

We've created a parameter context within NiFi which is allocated to several process groups. We would like to update the value of one parameter within the parameter context. Is there any option to do this via the API?
The NiFi CLI from nifi-toolkit has commands for interacting with parameters; there is one called set-param:
https://github.com/apache/nifi/tree/master/nifi-toolkit/nifi-toolkit-cli/src/main/java/org/apache/nifi/toolkit/cli/impl/command/nifi/params
You could use that, or look at the code to see how it uses the API.
Also, anything you can do from the NiFi UI has to go through the REST API. So you can always open Chrome Dev Tools, take some action in the UI, such as updating a parameter, and then look at which calls are made.
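If you'd rather script the REST API directly, the sketch below (Python with requests) follows the same asynchronous update-request flow the UI uses. The base URL, context id, auth header and the exact payload field names are assumptions, so verify them against your NiFi version's REST API docs or a Dev Tools trace before relying on this:
import time
import requests

NIFI = 'https://nifi.example.com:8443/nifi-api'   # placeholder base URL
CONTEXT_ID = 'your-parameter-context-id'          # placeholder parameter context id
AUTH = {'Authorization': 'Bearer <token>'}        # placeholder auth header

# 1. Fetch the context to get its current revision
ctx = requests.get(f'{NIFI}/parameter-contexts/{CONTEXT_ID}', headers=AUTH).json()

# 2. Submit an asynchronous update request changing one parameter
payload = {
    'revision': ctx['revision'],
    'component': {
        'id': CONTEXT_ID,
        'parameters': [{'parameter': {'name': 'my-param', 'value': 'new-value'}}],
    },
}
req = requests.post(f'{NIFI}/parameter-contexts/{CONTEXT_ID}/update-requests',
                    json=payload, headers=AUTH).json()
request_id = req['request']['requestId']

# 3. Poll until NiFi has applied the change to all bound process groups, then clean up
while True:
    status = requests.get(f'{NIFI}/parameter-contexts/{CONTEXT_ID}/update-requests/{request_id}',
                          headers=AUTH).json()
    if status['request']['complete']:
        break
    time.sleep(1)
requests.delete(f'{NIFI}/parameter-contexts/{CONTEXT_ID}/update-requests/{request_id}',
                headers=AUTH)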

Transferring a string from AWS Lambda to an EC2 instance that is started in the same lambda function

I am completely new to working with AWS. Currently I am in the following situation: my Lambda function starts an EC2 instance. This instance will need the information contained in the 'ID' variable. I was wondering how I could transfer this data from my Lambda function to the EC2 instance. Is this even possible?
import boto3
import os

region = 'eu-west-1'
instances = ['AnEC2Instance-ID']
ec2 = boto3.client('ec2', region_name=region)

def lambda_handler(event, context):
    ID = event.get('ID')
    ec2.start_instances(InstanceIds=instances)
    print('started your instance: ' + str(instances))
Here 'AnEC2Instance-ID' is supposed to be an EC2 instance ID.
This Lambda function is triggered by API Gateway. The ID is obtained from the Gateway event using the line ID = event.get('ID').
These EC2 instances have already been launched, and in this Lambda they are being started via boto3's ec2.start_instances. To pass data that way, you would have to do some clever AWS work beforehand to modify the instance's user data, and also have the instance configured to re-run the user data at every start (not just at launch). Quite complex, IMHO.
Two alternate suggestions:
Revisit your need to start an existing EC2 instance, as you can easily pass data to a new instance with boto3 via the client.run_instances function.
Or, if you truly need to revive an existing EC2 instance, you might need a third component to manage the correlation of EC2 instance IDs and your event IDs: how about DynamoDB? First, your script above writes a key-value pair of the instance ID and event ID. Then invoke ec2.start_instances; when the EC2 instance starts, it is pre-configured to run curl http://169.254.169.254/latest/meta-data/instance-id and uses that value to query DynamoDB. A rough sketch of the Lambda side follows.
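Here is what that Lambda side might look like (the table name, key names and event shape are assumptions; the instance would do the reverse lookup at boot using the instance ID it reads from the metadata URL above):
import boto3

region = 'eu-west-1'
ec2 = boto3.client('ec2', region_name=region)
dynamodb = boto3.client('dynamodb', region_name=region)

INSTANCE_ID = 'AnEC2Instance-ID'        # the existing instance you want to start
TABLE_NAME = 'instance-event-mapping'   # hypothetical table with partition key 'InstanceId'

def lambda_handler(event, context):
    event_id = event.get('ID')

    # Record which event ID belongs to which instance before starting it,
    # so the instance can look it up once it is running
    dynamodb.put_item(
        TableName=TABLE_NAME,
        Item={
            'InstanceId': {'S': INSTANCE_ID},
            'EventId': {'S': str(event_id)},
        },
    )

    ec2.start_instances(InstanceIds=[INSTANCE_ID])
    return {'started': INSTANCE_ID}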
When launching an Amazon EC2 instance, you can provide data in the User Data parameter.
This data will then be accessible on the instance via:
http://169.254.169.254/latest/user-data/
This technique is also used to pass a startup script to an instance. There is software provided on the standard Amazon AMIs that will run the script if it starts with specific identifiers. However, you can simply pass any data via User Data to make it available to the instance.
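If launching a fresh instance is acceptable, here is a rough sketch of handing the ID over via User Data with run_instances (the AMI, instance type and file path are placeholders):
import boto3

ec2 = boto3.client('ec2', region_name='eu-west-1')   # placeholder region

def lambda_handler(event, context):
    event_id = event.get('ID')

    # User Data runs as a shell script on first boot and is also readable
    # from http://169.254.169.254/latest/user-data/ at any time
    user_data = '#!/bin/bash\necho "{}" > /tmp/event_id\n'.format(event_id)

    response = ec2.run_instances(
        ImageId='ami-xxxxxxxxxxxxxxxxx',   # placeholder AMI
        InstanceType='t3.micro',           # placeholder instance type
        MinCount=1,
        MaxCount=1,
        UserData=user_data,
    )
    return response['Instances'][0]['InstanceId']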

Boto3 put object results in botocore.exceptions.ReadTimeoutError

I have set up two Ceph clusters, each with a RADOS gateway on one of its nodes.
What I'm trying to achieve is to transfer all objects from a bucket "A", with an endpoint in my cluster "1", to a bucket "B", which can be reached from another endpoint in my cluster "2". It doesn't really matter for my issue, but at least you understand the context.
I created a script in Python using the boto3 module.
The script is really simple. I just wanted to put an object in a bucket.
The relevant part is written below:
import boto3

# credentials is a dict loaded elsewhere with the endpoint and keys
s3 = boto3.resource('s3',
                    endpoint_url=credentials['endpoint_url'],
                    aws_access_key_id=credentials['access_key'],
                    aws_secret_access_key=credentials['secret_key'],
                    use_ssl=False)
s3.Object('my-bucket', 'hello.txt').put(Body=open('/tmp/hello.txt', 'rb'))
(hello.txt just contains a word)
Let's say this script is written and run from a node (the RADOS gateway endpoint node) in my cluster "1". It works well when the endpoint_url is the node I'm running the script from, but it does not work when I try to reach my other endpoint (the RADOS gateway located on another node within my cluster "2").
I get this error:
botocore.exceptions.ReadTimeoutError: Read timeout on endpoint URL
The weird thing is that I can create a bucket without any error:
s3_src.create_bucket(Bucket=bucket_name)
s3_dest.create_bucket(Bucket=bucket_name)
I can even list the buckets of my two endpoints.
Do you have any idea why I can do pretty much everything except put a single object through my second endpoint?
I hope this makes sense.
Ultimately, I found that the issue was not related to boto but to the Ceph pool that contains my data.
The bucket pool was healthy, which is why I could create buckets, whereas the data pool was unhealthy, hence the issue when I tried to put an object into a bucket.

Reducing Parse Server to only Parse Cloud

I'm currently using a self-hosted, up-to-date Parse Server, but I'm facing some security issues.
At the moment, calls to the /classes route can retrieve any object in any table, and even though I might want an object to be publicly readable, I wouldn't like to expose all of its fields. In short, I don't want the database to be retrievable at all; I would like to disable "everything" except the Parse Cloud Code, so that I can call my own functions but clients (Android, iOS, C#, Javascript...) cannot retrieve data directly.
Is there any way to do this? I've been searching deeply for this, trying to debug some Controllers, but I don't have a clue.
Thank you very much in advance.
tl;dr: set the ACL for all objects to be readable only with the master key, and then tell the query in Cloud Code to use the master key when querying your data.
So, without changing Parse Server itself, you could make use of ACLs and only allow a specific user to access objects. You would then "log in" as that user in your Cloud Code and be able to access all objects.
As the old method, Parse.Cloud.useMasterKey(), isn't available in the open-source Parse Server, you will have to pass the useMasterKey parameter to the query you are running, which should do the trick for this particular request and will bypass ACLs/CLPs. There is an example in the Parse Server wiki as well.
For convenience, here is a short code example from the Wiki:
Parse.Cloud.define('getTotalMessageCount', function(request, response) {
    var query = new Parse.Query('Messages');
    query.count({
        useMasterKey: true
    }) // count() will use the master key to bypass ACLs
    .then(function(count) {
        response.success(count);
    });
});