We have a Windows Azure Federated database, which we need to turn into a normal database (due to Federations being retired shortly).
Having read through copious amounts of documentation and tried various things, the answer seems to be the ALTER FEDERATION ... SWITCH OUT AT command:-
https://msdn.microsoft.com/library/dn269988.aspx
Removes all federation metadata and constraints from the federation member database. After execution, the federation member is a standalone database.
The format for the command is given as:-
ALTER FEDERATION federation_name SWITCH OUT AT ([LOW | HIGH] distribution_name = boundary_value)
LOW or HIGH determines the federation member that will be switched out on the respective side of the given federation boundary_value. The boundary value must correspond to an existing partition value, range-high or range-low, in the existing federation.
and there is a specific example that switches out the federation member on the low side of the boundary value 100 (i.e. the member containing 99):-
ALTER FEDERATION CustomerFederation SWITCH OUT AT (LOW cid = 100)
So, taking all of the above information, I queried the Federation values, which returned the following:-
SELECT * FROM sys.federations
federation_id : 65536
name : CustomerFederation
SELECT * FROM sys.federation_members
federation_id : 65536
member_id : 65536
SELECT * FROM sys.federation_distributions
federation_id : 65536
distribution_name : cid
distribution_type : RANGE
system_type_id : 127
max_length : 8
precision : 19
scale : 0
collation_name : NULL
user_type_id : 127
boundary_value_in_high : 1
SELECT * FROM sys.federation_member_distributions
federation_id : 65536
member_id : 65536
distribution_name : cid
range_low : -9223372036854775808
range_high : NULL
However, no matter what value I try to use for boundary_value, I get the following:-
Msg 45026, Level 16, State 1, Line 1
ALTER FEDERATION SWITCH operation failed. Specified boundary value does not exist for federation distribution cid and federation CustomerFederation.
I've tried using the range_low value:-
ALTER FEDERATION CustomerFederation SWITCH OUT AT (LOW cid = -9223372036854775808)
ALTER FEDERATION CustomerFederation SWITCH OUT AT (HIGH cid = -9223372036854775808)
I've also tried the values either side of that one, since the example used 100 to switch out 99.
I've tried using 0, as that's the value I use to connect to the Federation, but that gives the same error, as do -1 and 1, for both LOW and HIGH.
I've also tried specifying to use the Federation Root before running the command:-
USE FEDERATION ROOT WITH RESET
GO
ALTER FEDERATION CustomerFederation SWITCH OUT AT (LOW cid = -9223372036854775808)
I have tried running it from the main database and from the Federation.
Has anyone successfully used the ALTER FEDERATION ... SWITCH OUT AT command and can point me in the right direction please?
After hunting around some more, I found a link to a Federation Migration Utility:
https://code.msdn.microsoft.com/vstudio/Federations-Migration-ce61e9c1
Looking over the code, it appeared that the correct command was one I'd already tried:
ALTER FEDERATION CustomerFederation SWITCH OUT AT (HIGH cid = -9223372036854775808)
This time it worked. I'm not sure why it didn't the first time; possibly something else I'd tried beforehand had thrown it out.
In Azure Device Provisioning Service, when using a custom allocation policy with '--reprovision-policy reprovisionandmigratedata', is it possible to migrate the device twin data when changing hubs and also change some of the values in the twin?
From experiments, initialTwin is ignored when moving between hubs (as opposed to registering for the first time), which is not that unexpected.
Example
Let's say that device d1 is provisioned to hub1 and its desired is
"desired" : {
"a": 1
}
Some time later d1 reprovisions, the allocation function is executed, and it moves the device to hub2. I need the new desired to be:
"desired" : {
"a": 2
}
I have confirmed that with "re-provision and migrate data" the initialTwin is ignored, and that this is by design.
An alternative is to query the latest twin from the device as part of custom allocation, add the new property and send it to DPS while choosing "re-provision and reset to initial config".
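A rough sketch of that alternative, written in Java with Jackson purely for illustration: it assumes the custom allocation webhook contract (DPS POSTs a body that includes a linkedHubs array and expects back iotHubHostName plus an optional initialTwin), and it leaves the step of reading the current twin from the old hub as a placeholder, since that would go through the IoT Hub service SDK or REST API. Treat the field names and the hub-selection logic as assumptions to verify against the DPS documentation.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class AllocationSketch {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    /**
     * Builds the JSON body returned to DPS from the custom allocation webhook.
     * requestBody is the payload DPS posts to the function; currentDesired is the
     * desired section read from the device's existing twin on the old hub
     * (fetched beforehand via the IoT Hub service SDK or REST API - omitted here).
     */
    static String buildAllocationResponse(String requestBody, ObjectNode currentDesired) throws Exception {
        JsonNode request = MAPPER.readTree(requestBody);
        // Pick the target hub from the linked hubs DPS passes in (simplified: take the first).
        String targetHub = request.path("linkedHubs").get(0).asText();

        // Carry the old desired properties forward and override the value we need changed.
        currentDesired.put("a", 2);

        ObjectNode initialTwin = MAPPER.createObjectNode();
        ObjectNode properties = initialTwin.putObject("properties");
        properties.set("desired", currentDesired);

        ObjectNode response = MAPPER.createObjectNode();
        response.put("iotHubHostName", targetHub);
        response.set("initialTwin", initialTwin);
        // DPS only applies initialTwin with "re-provision and reset to initial config",
        // which is why the answer above pairs this approach with that policy.
        return MAPPER.writeValueAsString(response);
    }
}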
I have a table into which inserts happen at the beginning of the performance runs, when the job starts. During the insertion there are also parallel operations (GET/UPDATE queries) on that table. The GET operation also updates a value in a column, marking that record as picked. However, the next GET performed on the table returns the same record again, even though the record was already marked in progress.
P.S.: both operations are done by the same single thread in the system. Logs below for reference: the record is marked in progress in the first line at 20:36:42,864, yet it is returned in the result set of the query executed at 20:36:42,891 by the same thread.
We also observed that during high load (usually in the same scenario as above) some update operations intermittently did not take effect on the table, even though the update executed successfully without throwing an exception (validated using the returned result and then doing a GET just after that to check the updated value).
13 Apr 2020 20:36:42,864 [SHT-4083-initial] FINEST - AbstractCacheHelper.markContactInProgress:2321 - Action state after mark in progresss contactId.ATTR=: 514409 for jobId : 4083 is actionState : 128
13 Apr 2020 20:36:42,891 [SHT-4083-initial] FINEST - CacheAdvListMgmtHelper.getNextContactToProcess:347 - Query : select priority, contact_id, action_state, pim_contact_store_id, action_id
, retry_session_id, attempt_type, zone_id, action_pos from pim_4083 where handler_id = ? and attempt_type != ? and next_attempt_after <= ? and action_state = ? and exclude_flag = ? order
by attempt_type desc, priority desc, next_attempt_after asc,contact_id asc limit 1
This usually happens during the performance runs when parallel jobs that work on Ignite are started. Can anyone suggest what can be done to avoid such a situation? (One possible approach is sketched after the configuration below.)
We have 2 Ignite data nodes deployed as a Spring Boot service in the cluster, accessed by 3 client nodes.
Ignite version: 2.7.6. The cache configuration is as follows:
// Imports assumed for the snippet below; CACHE_NAME, CACHE_TEMPLATE, storagePath,
// walStoragePath, archiveWalStoragePath, cacheInitialMemSize, cacheMaxMemSize,
// cacheFailureTimeout, clientCfg, logger and MyLifecycleBean are defined elsewhere in the service.
import java.util.Collection;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheRebalanceMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.WALMode;

IgniteConfiguration cfg = new IgniteConfiguration();
CacheConfiguration cachecfg = new CacheConfiguration(CACHE_NAME);
cachecfg.setRebalanceThrottle(100);
cachecfg.setBackups(1);
cachecfg.setCacheMode(CacheMode.REPLICATED);
cachecfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
cachecfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
cachecfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
// Defining and creating a new cache to be used by Ignite Spring Data repository.
CacheConfiguration ccfg = new CacheConfiguration(CACHE_TEMPLATE);
ccfg.setStatisticsEnabled(true);
ccfg.setCacheMode(CacheMode.REPLICATED);
ccfg.setBackups(1);
DataStorageConfiguration dsCfg = new DataStorageConfiguration();
dsCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
dsCfg.setStoragePath(storagePath);
dsCfg.setWalMode(WALMode.FSYNC);
dsCfg.setWalPath(walStoragePath);
dsCfg.setWalArchivePath(archiveWalStoragePath);
dsCfg.setWriteThrottlingEnabled(true);
cfg.setAuthenticationEnabled(true);
dsCfg.getDefaultDataRegionConfiguration()
.setInitialSize(Long.parseLong(cacheInitialMemSize) * 1024 * 1024);
dsCfg.getDefaultDataRegionConfiguration().setMaxSize(Long.parseLong(cacheMaxMemSize) * 1024 * 1024);
cfg.setDataStorageConfiguration(dsCfg);
cfg.setClientConnectorConfiguration(clientCfg);
// Run the command to alter the default user credentials
// ALTER USER "ignite" WITH PASSWORD 'new_passwd'
cfg.setCacheConfiguration(cachecfg);
cfg.setFailureDetectionTimeout(Long.parseLong(cacheFailureTimeout));
ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
ccfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
ccfg.setRebalanceThrottle(100);
int pool = cfg.getSystemThreadPoolSize();
cfg.setRebalanceThreadPoolSize(2);
cfg.setLifecycleBeans(new MyLifecycleBean());
logger.info(methodName, "Starting ignite service");
ignite = Ignition.start(cfg);
ignite.cluster().active(true);
// Get all server nodes that are already up and running.
Collection<ClusterNode> nodes = ignite.cluster().forServers().nodes();
// Set the baseline topology that is represented by these nodes.
ignite.cluster().setBaselineTopology(nodes);
ignite.addCacheConfiguration(ccfg);
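Given that the cache is TRANSACTIONAL with FULL_SYNC writes, one option worth trying is to perform the read and the mark-in-progress update inside a single pessimistic transaction, so the entry stays locked until the new action_state is committed and replicated. This is only a minimal sketch under assumptions: the cache name, key and value types below are made-up stand-ins for the real model, and the SQL-based getNextContactToProcess query would still need to observe the committed state.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class ContactPickSketch {
    /** Hypothetical value type standing in for the real cache entry. */
    static class ContactAction implements java.io.Serializable {
        int actionState;
    }

    /**
     * Reads a contact and marks it in progress in one pessimistic transaction,
     * so the entry stays locked until the new action_state is committed.
     */
    static ContactAction pickContact(Ignite ignite, long contactId) {
        IgniteCache<Long, ContactAction> cache = ignite.cache("pim_4083"); // assumed cache name
        try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
            ContactAction action = cache.get(contactId); // acquires the entry lock
            if (action != null) {
                action.actionState = 128;                // mark in progress
                cache.put(contactId, action);
            }
            tx.commit();                                 // FULL_SYNC: replicas see the new state
            return action;
        }
    }
}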
I'm using Redis on a clustered db (locally). I'm trying the MULTI command, but it seems that it is not working. Individual commands work and I can see how the shard moves.
Is there anything else I should be doing to make MULTI work? The documentation is unclear about whether or not it should work. https://redis.io/topics/cluster-spec
In the example below I just set individual keys (note how the node/port changes as the cluster redirects), then try a MULTI command. The commands execute immediately, before EXEC is called.
127.0.0.1:30001> set a 1
-> Redirected to slot [15495] located at 127.0.0.1:30003
OK
127.0.0.1:30003> set b 2
-> Redirected to slot [3300] located at 127.0.0.1:30001
OK
127.0.0.1:30001> MULTI
OK
127.0.0.1:30001> HSET c f val
-> Redirected to slot [7365] located at 127.0.0.1:30002
(integer) 1
127.0.0.1:30002> HSET c f2 val2
(integer) 1
127.0.0.1:30002> EXEC
(error) ERR EXEC without MULTI
127.0.0.1:30002> HGET c f
"val"
127.0.0.1:30002>
MULTI transactions, as well as any multi-key operations, are supported only within a single hash slot in a clustered Redis deployment.
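A minimal sketch of working within that constraint using the Jedis client: give related keys a common hash tag (only the part inside {...} is hashed), so they map to the same slot, and run the MULTI against the node that owns that slot. The port 30002 and the key names below are illustrative, following the transcript above.

import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class ClusterMultiSketch {
    public static void main(String[] args) {
        // Connect directly to the node that owns the slot for the {c} hash tag
        // (127.0.0.1:30002 in the transcript above).
        try (Jedis jedis = new Jedis("127.0.0.1", 30002)) {
            // Both keys contain the same {c} hash tag, so they hash to one slot
            // and can take part in the same MULTI/EXEC block.
            Transaction t = jedis.multi();
            t.hset("{c}:profile", "f", "val");
            t.hset("{c}:orders", "f2", "val2");
            List<Object> results = t.exec(); // queued commands run atomically here
            System.out.println(results);
        }
    }
}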
I am trying to write a table into my data warehouse using the RPostgreSQL package
library(DBI)
library(RPostgreSQL)
pano = dbConnect(dbDriver("PostgreSQL"),
host = 'db.panoply.io',
port = '5439',
user = panoply_user,
password = panoply_pw,
dbname = mydb)
RPostgreSQL::dbWriteTable(pano, "mtcars", mtcars[1:5, ])
I am getting this error:
Error in postgresqlpqExec(new.con, sql4) :
RS-DBI driver: (could not Retrieve the result : ERROR: syntax error at or near "STDIN"
LINE 1: ..."hp","drat","wt","qsec","vs","am","gear","carb" ) FROM STDIN
^
)
The above code writes into Panoply as a 0 row, 0 byte table. Columns seem to be properly entered into Panoply but nothing else appears.
First and most important: Redshift <> PostgreSQL.
Redshift does not use the Postgres bulk loader (so STDIN is NOT allowed).
There are several options available; which one to choose depends on your needs, especially the volume of data.
For high volumes of data you should write to S3 first and then use the Redshift COPY command (a sketch follows after the links below).
There are many options; take a look at
https://github.com/sicarul/redshiftTools
For low volumes see
inserting multiple records at once into Redshift with R
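To make the high-volume path concrete, here is a minimal sketch of issuing a Redshift COPY from S3, shown in Java/JDBC purely for illustration (the same COPY statement can be sent from R as an ordinary SQL statement). The endpoint, credentials, bucket, IAM role and table name are placeholders, and it assumes a Redshift-compatible JDBC driver is on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class RedshiftCopySketch {
    public static void main(String[] args) throws SQLException {
        // Placeholder endpoint and credentials.
        String url = "jdbc:redshift://your-cluster.example.us-east-1.redshift.amazonaws.com:5439/mydb";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement()) {
            // Redshift bulk-loads from S3 with COPY instead of the Postgres STDIN path.
            st.execute("COPY mtcars FROM 's3://my-bucket/mtcars.csv' "
                     + "IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftCopyRole' "
                     + "FORMAT AS CSV IGNOREHEADER 1");
        }
    }
}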
In the dashboard I see there are currently 22 open connections to the DB instance, blocking new connections with the error:
remaining connection slots are reserved for non-replication superuser connections.
I'm accessing the DB from a web service API running on an EC2 instance and always follow the practice of:
Class.forName(DB_CLASS);
Connection connection = DriverManager.getConnection(URL, USER_NAME, PASSWORD);
Statement statement = connection.createStatement();
ResultSet resultSet = statement.executeQuery(SQL_Query_String);
...
resultSet.close();
statement.close();
connection.close();
Can I do something else in the code?
Should I do something else in the DB management?
Is there a way to periodically close connections?
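On the code side, one thing that does help regardless of the DB settings is closing resources in a way that survives exceptions: the explicit close() calls above are skipped if executeQuery() throws, which leaks a connection slot. A minimal sketch using try-with-resources (URL and query are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class QueryExample {
    static void runQuery(String url, String user, String password, String sql) throws SQLException {
        // try-with-resources closes the ResultSet, Statement and Connection in reverse order,
        // even when executeQuery() throws, so no connection slot is leaked.
        try (Connection connection = DriverManager.getConnection(url, user, password);
             Statement statement = connection.createStatement();
             ResultSet resultSet = statement.executeQuery(sql)) {
            while (resultSet.next()) {
                // ... process the row ...
            }
        }
    }
}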
Amazon sets the number of connections for each instance model based on the amount of memory that model is entitled to:
MODEL max_connections innodb_buffer_pool_size
--------- --------------- -----------------------
t1.micro 34 326107136 ( 311M)
m1-small 125 1179648000 ( 1125M, 1.097G)
m1-large 623 5882511360 ( 5610M, 5.479G)
m1-xlarge 1263 11922309120 (11370M, 11.103G)
m2-xlarge 1441 13605273600 (12975M, 12.671G)
m2-2xlarge 2900 27367833600 (26100M, 25.488G)
m2-4xlarge 5816 54892953600 (52350M, 51.123G)
But if you want, you can change the maximum number of connections to a custom value:
from RDS Console > Parameter Groups > Edit Parameters,
change the value of the max_connections parameter.
For closing connections periodically you can set up a cron job running something like this:
-- On PostgreSQL 9.2+ (anything current on RDS) the columns are pid, state and
-- state_change rather than the old procpid, current_query and query_start.
select pg_terminate_backend(pid)
from pg_stat_activity
where usename = 'yourusername'
and state = 'idle'
and state_change < current_timestamp - interval '5 minutes';
I'm using Amazon RDS, Scala, PostgreSQL & Slick. First of all, the number of available connections in RDS depends on the amount of available RAM, i.e. the size of the RDS instance. It's best not to change the default connection number.
You can check the max connection number by executing the following SQL statement on your RDS DB instance:
show max_connections;
Check your Spring configuration to see how many threads you're spawning:
database {
dataSourceClass = org.postgresql.ds.PGSimpleDataSource
properties = {
url = "jdbc:postgresql://test.cb1111.us-east-2.rds.amazonaws.com:6666/dbtest"
user = "youruser"
password = "yourpass"
}
numThreads = 90
}
All of the connections ARE made upon Spring Boot initialization, so beware not to cross the RDS limit. That includes other services that connect to the DB. In this case the number of connections will be 90+.
The current limit for db.t2.small is 198 (4GB of RAM)
You can change idle_in_transaction_session_timeout in the parameter group to remove idle connections.
idle_in_transaction_session_timeout (integer)
Terminate any session with an open transaction that has been idle for
longer than the specified duration in milliseconds. This allows any
locks held by that session to be released and the connection slot to
be reused; it also allows tuples visible only to this transaction to
be vacuumed. See Section 24.1 for more details about this.
The default value of 0 disables this feature.
The current value in AWS RDS is 86400000 which when converted to hours (86400000/1000/60/60) is 24 hours.
You can change the max connections in the parameter group for your RDS instance. Try to increase it.
Or you can try to upgrade your instance, as max_connections is set to {DBInstanceClassMemory/31457280}
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html