Using SESSION or LOCAL as below, I cannot set a parameter persistently in PostgreSQL:
SET SESSION log_statement = 'all';
SET LOCAL log_statement = 'all';
In MySQL, by contrast, I can use PERSIST and GLOBAL to set a parameter persistently and semi-persistently, respectively:
SET PERSIST transaction_isolation = 'READ-UNCOMMITTED';
SET GLOBAL transaction_isolation = 'READ-COMMITTED';
So, is there any way to set a parameter persistently with a query in PostgreSQL?
You need to use ALTER SYSTEM if you want to change it globally for all databases and users, or change it in postgresql.conf.
Note that you need to reload the configuration for the change to take effect (depending on the parameter, you might even need to restart Postgres completely; this is documented for each parameter).
If you only want to change it for a specific database, use ALTER DATABASE
If you only want to change it for a user, use ALTER USER
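For example, here is a sketch of all three scopes, reusing the log_statement parameter from the question (mydb and myuser are placeholder names):

```sql
-- Persist for the whole cluster (written to postgresql.auto.conf):
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();   -- reload the configuration without a restart

-- Or scope the change to one database or one role:
ALTER DATABASE mydb SET log_statement = 'all';
ALTER ROLE myuser SET log_statement = 'all';
```

The database- and role-level settings take effect for new sessions; existing connections keep their current values.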
SQL Server 2017 Enterprise Query Store is showing no data at all but shows READ_ONLY as the actual mode
The one similar question on this forum has an answer that doesn't apply: none of the exclusions are present.
I ran:
GO
ALTER DATABASE [MyDB] SET QUERY_STORE (OPERATION_MODE = READ_ONLY, INTERVAL_LENGTH_MINUTES = 5, QUERY_CAPTURE_MODE = AUTO)
GO
I also ran all these, having referenced the link below, DB context is MyDB:
https://learn.microsoft.com/en-us/sql/relational-databases/performance/best-practice-with-the-query-store?view=sql-server-2017
ALTER DATABASE MyDB SET QUERY_STORE = ON;
SELECT actual_state_desc, desired_state_desc, current_storage_size_mb,
max_storage_size_mb, readonly_reason, interval_length_minutes,
stale_query_threshold_days, size_based_cleanup_mode_desc,
query_capture_mode_desc
FROM sys.database_query_store_options;
ALTER DATABASE MyDB SET QUERY_STORE CLEAR;
-- Run together...
ALTER DATABASE MyDB SET QUERY_STORE = OFF;
GO
EXEC sp_query_store_consistency_check
GO
ALTER DATABASE MyDB SET QUERY_STORE = ON;
GO
No issues found. The SELECT returns matching Actual and Desired states.
I am a member of the sysadmin role, I set up all 30+ production servers myself, and this is the only miscreant.
The server is under heavy load and I need internal eyes on it, in addition to SolarWinds DPA. I've also run sp_BlitzQueryStore, but it returns an empty rowset from the top query and just the two priority-255 rows from the second.
What on earth did I do wrong? Any clues, anyone, please?
I know this is an old post, but for those who come here looking for answers: I see that you ran the command with OPERATION_MODE = READ_ONLY. This puts Query Store into a read-only mode, in which it only reads what is already stored without collecting any additional information. No information will be shown if the query store has never been in READ_WRITE mode.
If it has been in READ_WRITE mode before and you are still not seeing anything, it is possible that the heavy load on the server is pushing query plans out of the cache.
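A minimal sketch of the fix, reusing [MyDB] from the question:

```sql
-- Switch Query Store to read-write so it starts collecting data:
ALTER DATABASE [MyDB] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);

-- Verify that the actual state now matches the desired state:
SELECT actual_state_desc, desired_state_desc, readonly_reason
FROM sys.database_query_store_options;
```

If readonly_reason is non-zero afterwards, the server forced it back to read-only (for example, because max_storage_size_mb was reached), which is a separate issue to chase.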
I understand what these settings do and why they're important. However, I have a few questions:
If I run this once before a CREATE/ALTER:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
Is it necessary to run it again the next time I do another CREATE/ALTER?
Basically, are these SET commands making changes that persist across batches/connections?
Any SET options in effect when an object is created (a module or a table) are tied to that object in perpetuity, and those settings matter. For instance, to add a filtered index to a table, the table must have been created with a whole host of correctly specified SET options. You can see those options for modules by looking in sys.sql_modules.
However, once the object is created, those options cannot be changed (at least not by any means I'm aware of), so adding SET options to an ALTER TABLE script would apply to any SQL you run (as usual) but would not update the SET options associated with that table.
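To illustrate, here is one way to inspect the recorded settings (a sketch; dbo.MyProc and MyTable are placeholder names):

```sql
-- For modules (procedures, views, functions, triggers):
SELECT OBJECT_NAME(object_id) AS module_name,
       uses_ansi_nulls, uses_quoted_identifier
FROM sys.sql_modules
WHERE object_id = OBJECT_ID('dbo.MyProc');

-- For tables, only the ANSI_NULLS setting is recorded:
SELECT name, uses_ansi_nulls
FROM sys.tables
WHERE name = 'MyTable';
```

As for the SET commands themselves: they change session-level settings, so they persist across batches within the same connection but not across connections.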
Could anyone help me update the search path dynamically in PostgreSQL without changing the conf files? I don't want to restart my Postgres service once my application is up.
Use the SET command; for example:
SET search_path TO myschema,public;
Alternatively you can use
ALTER ROLE your_db_user SET search_path TO ....;
so you won't have to execute the SET on each connection.
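Side by side, the two scopes look like this (app_user and app_db are placeholder names; the database-level variant is an additional option with the same effect for all roles connecting to that database):

```sql
-- Per-session: lasts only until the connection closes.
SET search_path TO myschema, public;

-- Per-role default: applied at the start of every new session for that role.
ALTER ROLE app_user SET search_path TO myschema, public;

-- Per-database default:
ALTER DATABASE app_db SET search_path TO myschema, public;

-- Check the value in effect for the current session:
SHOW search_path;
```

Neither variant requires touching postgresql.conf or restarting the service; role- and database-level defaults apply to new connections only.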
I am using EF code first and sql server to create a database via
public static void Create(DbConnection connection)
{
    using (var context = new EmptyContext(connection))
    {
        ((IObjectContextAdapter)context).ObjectContext.CreateDatabase();
    }
}
By default, this seems to create a database with the AUTO_CLOSE option set to TRUE. Note that the model database has AUTO_CLOSE set to FALSE by default.
Therefore, it appears that EF is setting this option to TRUE.
Does anyone know how to override this behavior at database creation?
I would need to do this as the db is being created, rather than changing the db options after creation.
Thanks!
Well, it seems that AUTO_CLOSE is set to TRUE by default in SQL Server Express and LocalDB. This is baked into the CREATE DATABASE behavior, and therefore cannot be changed until after the database has been created.
http://blogs.msdn.com/b/sqlexpress/archive/2008/02/22/sql-express-behaviors-idle-time-resources-usage-auto-close-and-user-instances.aspx
Thus, the only way to set AUTO_CLOSE to FALSE is either to run an ALTER DATABASE command after creation, or to create the database directly in SQL Server, in which case AUTO_CLOSE will mirror the model database's value (which is FALSE by default).
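The post-creation fix is a one-liner; for example, right after CreateDatabase() returns (MyDb is a placeholder name):

```sql
-- Run immediately after the database has been created:
ALTER DATABASE [MyDb] SET AUTO_CLOSE OFF;

-- Confirm the setting took:
SELECT name, is_auto_close_on
FROM sys.databases
WHERE name = 'MyDb';
```

From EF, this could be executed through the same connection once CreateDatabase() completes, so the window with AUTO_CLOSE on is negligible.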
Does SET IDENTITY_INSERT [Table] ON persist beyond the scope of a SQL Script? I'm wondering if I need to explicitly set it to "OFF" or if SQL Server knows that it should only use that setting for the current script.
Thanks!
Yes, it does persist beyond the current batch.
It doesn't, however, persist beyond the current session, so if you disconnect immediately after running it, you don't need to change it.
As an aside, it may only be ON for one table at a time per session, and, as Aaron pointed out in a comment below, SQL Server will throw an error if you try to set it for a second table in the same session.
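Because of the batch-spanning and one-table-per-session behavior, the safest pattern is to turn it off explicitly in the same script (dbo.MyTable and its columns are placeholder names):

```sql
SET IDENTITY_INSERT dbo.MyTable ON;

-- Explicit values for the identity column are now allowed:
INSERT INTO dbo.MyTable (Id, Name)
VALUES (42, 'explicit id');

-- Turn it back off in the same script so other tables can use it:
SET IDENTITY_INSERT dbo.MyTable OFF;
```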