I have a question.
I have now configured the topology using Mininet, and I want to limit the flow table size of the switch.
Is there a way to limit the flow table size of the switch?
Or can Open vSwitch limit it?
Thank you.
Yes, you can instruct Open vSwitch to limit the size of a flow table, either by refusing new flows or by evicting old flows. From the ovs-vsctl documentation:
Make flow table 0 on bridge br0 refuse to accept more than 100 flows:
ovs-vsctl -- --id=@ft create Flow_Table flow_limit=100 overflow_policy=refuse -- set Bridge br0 flow_tables=0=@ft
Make flow table 0 on bridge br0 evict flows, with fairness based on the matched ingress port, when there are more than 100:
ovs-vsctl -- --id=@ft create Flow_Table flow_limit=100 overflow_policy=evict groups='"NXM_OF_IN_PORT[]"' -- set Bridge br0 flow_tables:0=@ft
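Afterwards you can check that the limit was applied (these just list the Flow_Table records and the bridge's flow_tables map; br0 is the bridge from the examples above):
ovs-vsctl list Flow_Table
ovs-vsctl list Bridge br0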
In a master-slave scenario, Redis replication happens asynchronously. But is it guaranteed that the commands are replicated in order? Say I have these commands:
SET key1 111
SET key2 222
SET key3 333
If the slave node has "key2", can I say for sure that it will also have "key1"?
Yes, commands are replicated in order. Anything else wouldn't actually be replication.
As described in the documentation, both the master and the replica keep track of an offset indicating where they are in the stream of commands. That allows the replica to know if it receives a command out of order and not process it prematurely.
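You can watch those offsets yourself with the INFO replication command (run it on each node; which fields appear depends on the node's role and the Redis version):
INFO replication
On the master, master_repl_offset is how many bytes of the command stream have been produced so far; on a replica, slave_repl_offset is how far into that stream it has applied. Since the stream is applied strictly in order, a replica that has applied SET key2 222 has necessarily already applied SET key1 111, which sits at an earlier offset.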
We can skip an error in GTID-based replication with the following steps:
STOP SLAVE;
SET GTID_NEXT='SERVER_UUID:LAST_TRANSACTION_NUMBER+1';
BEGIN; COMMIT;
SET GTID_NEXT='AUTOMATIC';
START SLAVE;
But if replication is running with channel information, how do we skip the transaction for a particular channel?
We can add the FOR CHANNEL clause to STOP SLAVE and START SLAVE, but how do we skip a transaction for a particular channel - is there a channel option for the SET GTID_NEXT command, or something else?
In a replication topology, a GTID is a globally unique identifier for a transaction, so if a transaction needs to be skipped, specifying a channel is irrelevant.
It is similar to how MySQL replication filters (MySQL 5.7) are global, or in other words, applied to all running replication channels.
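As a sketch, the channel only appears in the STOP/START statements ('channel_1' and the GTID value are placeholders for your channel name and the failing transaction's GTID):
STOP SLAVE FOR CHANNEL 'channel_1';
SET GTID_NEXT='3E11FA47-71CA-11E1-9E33-C80AA9429562:23';
BEGIN; COMMIT;
SET GTID_NEXT='AUTOMATIC';
START SLAVE FOR CHANNEL 'channel_1';
GTID_NEXT is a session variable, so the empty transaction is written once with that GTID, and the channel replicating it will treat the failing transaction as already executed.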
I tried connecting to the database server using the command:
psql -h host_ip -d db_name -U user_name --password
It displays the following line and refuses to connect.
psql: FATAL: too many connections for role "user_name".
How to close the active connections?
I do not have admin rights for the database. I am just an ordinary user.
From inside any DB of the cluster:
Catch-22: you need to be connected to a database first. Maybe you can connect as another user? (By default, some connections are reserved for superusers via the superuser_reserved_connections setting.)
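Once you are in, you can see how close the role is to its cap (a quick check; 'user_name' is the role from the error message):
SELECT count(*) FROM pg_stat_activity WHERE usename = 'user_name';
SHOW max_connections;  -- the cluster-wide ceiling, for comparison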
To get detailed information for each connection by this user:
SELECT *
FROM pg_stat_activity
WHERE usename = 'user_name';
As the same user or as superuser you can cancel all (other) connections of a user:
SELECT pg_cancel_backend(pid) -- (SIGINT)
-- pg_terminate_backend(pid) -- the less patient alternative (SIGTERM)
FROM pg_stat_activity
WHERE usename = 'user_name'
AND pid <> pg_backend_pid();
Better be sure it's ok to do so. You don't want to terminate important queries (or connections) that way.
See pg_cancel_backend() and pg_terminate_backend() in the manual.
From a Linux shell
Did you start those other connections yourself? Maybe a hanging script of yours? You should be able to kill those (if you are sure it's ok to do so).
You can investigate with ps which processes might be at fault:
ps aux
ps aux | grep psql
If you identify a process to kill (better be sure, you do not want to kill the server):
kill 123457689 # pid of process here.
Or with SIGKILL instead of SIGTERM:
kill -9 123457689
I'm pretty new to pgAdmin, and so far I have not used the command line. I had the same issue, and the easiest way to resolve it in my case was simply to delete the processes listed under "Database Activity" on the Dashboard (just click the X on the left side of the PID).
It's a bit tedious since you must delete each process individually, but doing so should free up your available connections.
Hope this is useful.
You need to connect to your PostgreSQL database and run:
ALTER ROLE your_username CONNECTION LIMIT -1;
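Here -1 means no per-role limit. Note that altering a role's connection limit normally requires superuser or CREATEROLE privileges, so this won't work for an ordinary user. You can verify the change with (substitute your role name):
SELECT rolname, rolconnlimit FROM pg_roles WHERE rolname = 'your_username';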
Also check the pool_size value in your client or application settings; it is probably set too high (or too low) for this server. Start with the default of pool_size = 10; that should fix the too_many_connections errors.
Check how many connections are allowed for that user; you can kill the user's other connections and log in again, but it's better to simply increase the connection limit for that user.
This issue mostly arises in pgAdmin; it seems that after all these years it still persists.
This will drop existing connections except for yours:
Query pg_stat_activity to get the pid values you want to kill, then call SELECT pg_terminate_backend(pid) on them.
PostgreSQL 9.2 and above:
SELECT pg_terminate_backend(pg_stat_activity.pid)
FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'TARGET_DB' -- ← change this to your DB
AND pid <> pg_backend_pid();
PostgreSQL 9.1 and below:
SELECT pg_terminate_backend(pg_stat_activity.procpid)
FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'TARGET_DB' -- ← change this to your DB
AND procpid <> pg_backend_pid();
from https://stackoverflow.com/a/5408501/13813241 (duplicate)
I was getting this in a specific situation: Django development. I had a shell open to query Django models while the dev server was also running, and I was using ElephantSQL for testing/prototyping, so the connection limit was quickly exhausted and it threw this error. Once I exited the django manage.py shell, it started working.
I'm getting nothing but ActiveRecord::TransactionIsolationConflict errors when I try to update records for one of my models. Retrying does not help.
What should I do?
Rails 3.2.13
Ruby 1.9.3
Option A: Hunt down locks on table
In my case, there were a number of orphaned processes which held locks on the table in question. Somehow, the application that initiated them dropped the connections, but the locks remained. The following instructions are drawn from Unlocking tables if thread is lost
Check locks:
mysql> show open tables where in_use > 0;
If you have no idea which session or process is locking the table(s), view the list of processes and identify likely candidates by their username or the database they're accessing:
mysql> show processlist;
Kill processes which you know or suspect to have locks on the table:
mysql> kill <process id>;
Option B: Increase timeout
A popular suggestion is to increase the innodb_lock_wait_timeout. The following instructions are drawn from How to debug Lock wait timeout exceeded on MySQL?
Check your timeout:
mysql> show variables like 'innodb_lock_wait_timeout';
Change timeout dynamically (not persistent):
mysql> SET GLOBAL innodb_lock_wait_timeout = 120;
Change timeout in config file (persistent):
[mysqld]
innodb_lock_wait_timeout=120
I have two databases on the same instance: one called ICMS and one called CarePay_DEV1.
When a change happens in ICMS (the source), it needs to send a message to CarePay_DEV1 (the destination).
I am new to Service Broker and am trying to make a message go to the queue. Once that works, I will hopefully get the data into a table in the destination, which will then be processed by .NET code. But first I just want something to appear in the destination.
So, step 1: I enable Service Broker on the two databases:
-- Enable Broker on CarePay
ALTER DATABASE CarePay_Dev1
SET ENABLE_BROKER;
-- Enable Broker on Source
ALTER DATABASE ICMS_TRN
SET ENABLE_BROKER;
Step 2: Create the message type on the source and destination.
-- Create Message Type on Receiver:
USE CarePay_DEV1
GO
CREATE MESSAGE TYPE [IcmsCarePayMessage]
VALIDATION=WELL_FORMED_XML;
-- Create Message Type on Sender:
USE ICMS_TRN
GO
CREATE MESSAGE TYPE [IcmsCarePayMessage]
VALIDATION=WELL_FORMED_XML;
I then create the Contracts on both databases:
-- Create Contract on Receiver:
USE CarePay_DEV1
GO
CREATE CONTRACT [IcmsCarePayContract]
([IcmsCarePayMessage] SENT BY INITIATOR);
-- Create Contract on Sender:
USE ICMS_TRN
GO
CREATE CONTRACT [IcmsCarePayContract]
([IcmsCarePayMessage] SENT BY INITIATOR);
I then create the message queues on both databases:
-- CREATE Sending Message Queue
USE ICMS_TRN
GO
CREATE QUEUE CarePayQueue
-- CREATE Receiving Message Queue
USE CarePay_Dev1
GO
CREATE QUEUE CarePayQueue
And finally, I create the services on both databases:
-- Create the message services
USE ICMS_TRN
GO
CREATE SERVICE [CarePayService]
ON QUEUE CarePayQueue
GO
USE CarePay_DEV1
GO
CREATE SERVICE [CarePayService]
ON QUEUE CarePayQueue
Now, the queues should be ready, so then I try and send something from the source to the destination:
-- SEND THE MESSAGE!
USE ICMS_TRN
GO
DECLARE @InitDlgHandle UNIQUEIDENTIFIER
DECLARE @RequestMessage NVARCHAR(1000)
BEGIN TRAN
BEGIN DIALOG @InitDlgHandle
FROM SERVICE [CarePayService]
TO SERVICE 'CarePayService'
ON CONTRACT [IcmsCarePayContract]
SELECT @RequestMessage = N'<Message>The eagle has landed!</Message>';
SEND ON CONVERSATION @InitDlgHandle
MESSAGE TYPE [IcmsCarePayMessage] (@RequestMessage)
COMMIT TRAN
I get:
Command(s) completed successfully.
But when I then select from the destination queue, it's empty.
/****** Script for SelectTopNRows command from SSMS ******/
SELECT TOP 1000 *, casted_message_body =
CASE message_type_name WHEN 'X'
THEN CAST(message_body AS NVARCHAR(MAX))
ELSE message_body
END
FROM [CarePay_DEV1].[dbo].[CarePayQueue] WITH(NOLOCK)
Can anyone spot the issue? I can't see where I specify which database the message should be sent to - could that be part of the issue?
I highly recommend you read Adam Machanic's Service Broker Advanced Basics Workbench, specifically the section entitled "Routing and Cross-Database Messaging".
In addition, for future troubleshooting you may want to use SSBDiagnose or read through Remus Rusanu's numerous articles on the topic.
I think the initiator service sent the message to itself. Try changing the name of the destination (target) service.
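For example (a minimal sketch of that renaming, assuming the original services are dropped first; [IcmsSourceService] is a hypothetical name, and the target service lists the contract so it can accept dialogs on it):
USE ICMS_TRN
GO
CREATE SERVICE [IcmsSourceService]
ON QUEUE CarePayQueue
GO
USE CarePay_DEV1
GO
CREATE SERVICE [CarePayService]
ON QUEUE CarePayQueue ([IcmsCarePayContract])
GO
The dialog would then use FROM SERVICE [IcmsSourceService] and keep TO SERVICE 'CarePayService'. With distinct names, the TO SERVICE lookup can no longer resolve to the sender's own service, and on a single instance the message is delivered across databases via each database's default AutoCreatedLocal route.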