I know that pressing Ctrl+Shift+R will usually refresh IntelliSense.
Also, via Edit > IntelliSense > Refresh Local Cache.
However, I have a script which I run whenever I update my database objects to keep them all in sync. One part of the script is an update:
IF EXISTS (SELECT 1 FROM MyTable WHERE MyCol IS NULL)
BEGIN
    UPDATE MyTable
    SET MyCol = 1
    WHERE MyCol IS NULL;
END
GO
Now, MyCol was previously called something else, but it is called MyCol now - I can see it in Object Explorer.
I get the red squiggle under MyCol, so I refresh the IntelliSense cache as described above. However, no change.
If I copy the update into a new query edit window, no red squiggle.
The update runs OK and I can run the complete script with no issues. It is just bugging me as to why the squiggle won't disappear for this particular script.
I've restarted SSMS and restarted my machine...
What else can I try?
OK - I now realise what is going on.
It's because my script creates the table (if it isn't there already) in its original state and then alters columns etc. accordingly.
It seems IntelliSense is quite clever, but not quite clever enough.
I deduced this by trying the Update statement at the top of the script.
One for the memory bank.
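To illustrate the pattern (a minimal sketch - the names here are made up and the real script is much longer):
IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'MyTable')
BEGIN
    -- The script first creates the table with the ORIGINAL column name...
    CREATE TABLE dbo.MyTable (MyOldCol int NULL);
END
GO
-- ...and only later renames the column to its current name.
IF COL_LENGTH('dbo.MyTable', 'MyOldCol') IS NOT NULL
    EXEC sp_rename 'dbo.MyTable.MyOldCol', 'MyCol', 'COLUMN';
GO
-- IntelliSense parses the script against the CREATE TABLE above, which only
-- knows MyOldCol, so it flags MyCol here even though the statement runs fine.
UPDATE MyTable SET MyCol = 1 WHERE MyCol IS NULL;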
As the title says - when I perform an "INSERT" statement, I can't see the results unless I re-open PL/SQL Developer.
To make things a bit clearer:
After I perform this statement on the empty table "worker_temp" -
insert into worker_temp
select * from worker_b
I see that 100 records have been inserted.
But when I try to see the results using this query:
select * from worker_temp;
I still see an empty table.
Only after I quit PL/SQL Developer and re-open it can I see the records that I inserted earlier.
Is there a way to see the changes without closing and re-opening PL/SQL Developer?
What I've tried so far:
I've tried refreshing the table by right-clicking on it, and I've also tried refreshing the whole Tables folder.
I also tried committing -
commit;
But I'm not sure what that even is.
Tool-agnostic way:
begin
  insert into worker_temp
  select * from worker_b;
  commit;
end;
Judging by all the screenshots, you are likely getting a separate database session in each tab you are using - which is a good thing. You have to issue the commit in the same session that performed the insert. Another way of understanding this:
begin
  insert into worker_temp select * from worker_b;
  DBMS_OUTPUT.PUT_LINE('Rows inserted but not committed: ' || SQL%ROWCOUNT);
  -- 'undo' the insert by rolling it back instead of committing.
  rollback;
end;
The default setting in PL/SQL Developer is Multi session:
This means that each editor window you have open is logged into the database in a separate session. A session can't see another session's changes until it commits. This is rather like saving a shared Excel spreadsheet on a network drive. Nobody can see your changes until you have finished making them, which you'll appreciate is an important feature in a multi-user database.
In PL/SQL Developer, the Multi session default setting means that you can start a long-running query in one SQL window, and then get on with something else in another without being blocked and having to wait for it. With this setting, you'll need to commit your changes before any other editor window can see them. There are Commit and Rollback icons in the toolbar, or you can type commit; and execute it.
However, I always set mine to Dual session, meaning all windows are part of the same session, even if it means I sometimes have to wait for something. I find this simplifies things considerably, and also I can make changes across multiple windows without needing to commit, which can be helpful when working with global temporary tables or alter session commands.
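For example (an illustrative snippet, not from the question), a setting like this only affects the session that runs it, so Dual session mode is what makes it take effect in every window:
-- Session-level setting: all windows see it only because they share one session.
alter session set nls_date_format = 'YYYY-MM-DD HH24:MI:SS';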
Read more in this setup guide.
Relatively new to this.
I have a stored procedure that should run when a job is triggered. It all worked fine with the files containing test data that I used for import and testing (Excel sheets). I got a new file to test my solution with before deploying, but after executing the job with the new file, the stored procedure just keeps running without getting anything done.
I tried to delete the (Excel) file and start again, but it says it's open in another program (it isn't). I then noticed that any time I try to perform a simple select on one of the tables used in the stored procedure, it just keeps loading and never finishes. None of the simple commands work.
I've tried this:
SELECT * FROM Cleaned_Rebate
SELECT TOP 1 * FROM Cleaned_Rebate
TRUNCATE TABLE Cleaned_Rebate
DELETE FROM Cleaned_Rebate
SELECT COUNT(*) FROM Cleaned_Rebate
I also tried to create a new stored procedure identical to the original one, but it just never executes the create query; it keeps on loading. It only creates the new one if I remove 90% of the code.
I can't even execute the stored procedure for the sake of saving it (F5) with just an added comment...
I don't know what is wrong, why, or what I should do to fix this. Does anyone have an idea of what could be wrong? Any advice is appreciated!
I want to add that these aren't big tables - some of them should even be empty. The data sets aren't large either (about 100-300 rows?).
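In case it helps with diagnosis: I understand these symptoms can indicate lock blocking (e.g. an uncommitted transaction holding locks on the tables), and that something like this, run from a separate query window, would show which session is blocking which - though I haven't confirmed that is my issue:
-- List requests that are being blocked, who blocks them, and the SQL they run.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;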
Anytime I'm editing and debugging a SQL Server stored procedure, I'll make the changes, then refresh all along the line. I'll refresh folders: Stored Procedures, Programmability, the DB itself, all the way up to the server.
Regardless of how far up I refresh, when I click on the changed procedure it still shows the old version - and no, I don't need to post examples of which stored procedure or the code; it does this every time, on any change, regardless of type.
But, when I right click on the stored procedure and say, 'Execute Stored Procedure', it always runs on the 'unchanged' code.
It takes 3 or 4 clicks on the 'Debug' button and a check of the code before the 'changed' code suddenly appears.
There seems to be no rhyme or reason to when the app refreshes with the edited changes - usually 3 or 4 reruns of 'Debug'.
This isn't a huge issue, just a very time consuming one.
Does anyone know how to make the dang thing refresh the 1st time? Without having to re-debug over and over and check the content for changes each time before I know they 'took'?
I tried using the WITH RECOMPILE option in the SP, and sometimes it works:
ALTER PROCEDURE [dbo].[xx] (
    @pC_IDLEGAJO char(9),
    ....
    @P_MensajeError VarChar(500) output
)
WITH RECOMPILE
But in the end I drop and recreate it instead:
IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'[dbo].[NombreStored]')
    AND OBJECTPROPERTY(id, N'IsProcedure') = 1)
    DROP PROCEDURE [dbo].[NombreStored];
GO
CREATE PROCEDURE NombreStored ....
I have a TSQL script that is used to set up a database as part of my product's installation. It takes a number of steps which altogether take five minutes or so. Sometimes this script fails on the last step because the user running it does not have sufficient rights to the database. In that case I would like the script to fail straight away. To do this, I want the script to test up front what rights it has. Can anyone point me at a general-purpose way of testing whether the script is running with a particular security permission?
Edit: In the particular case I am looking at, it is trying to do a backup, but I have had other things go wrong and was hoping for a general-purpose solution.
select * from fn_my_permissions(NULL, 'SERVER')
This gives you a list of permissions the current session has on the server
select * from fn_my_permissions(NULL, 'DATABASE')
This gives you a list of permissions for the current session on the current database.
See here for more information.
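Since the failing step is a backup, you can also test for one specific permission up front - a sketch using the built-in HAS_PERMS_BY_NAME function (the error handling here is just illustrative):
-- HAS_PERMS_BY_NAME returns 1 if the current user holds the permission, 0 if not.
IF HAS_PERMS_BY_NAME(DB_NAME(), 'DATABASE', 'BACKUP DATABASE') <> 1
BEGIN
    RAISERROR('Insufficient rights: BACKUP DATABASE permission is required.', 16, 1);
    RETURN; -- stop this batch before the five-minute setup begins
END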
I assume it is failing on an update or insert after a long series of selects.
Just try a simple update or insert inside a transaction. Hard-code the row id, or whatever to make it simple and fast.
Don't commit the transaction--instead roll it back.
If you don't have rights to do the insert or update, this should fail. If you DO, it will roll back and not cause a permanent change.
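A sketch of that idea (dbo.MyTable, MyCol and the hard-coded ID are placeholders - pick something cheap in your own schema):
BEGIN TRANSACTION;
-- A no-op write: sets the column to its current value on one hard-coded row.
-- If the session lacks UPDATE permission, this statement fails immediately.
UPDATE dbo.MyTable SET MyCol = MyCol WHERE ID = 1;
ROLLBACK TRANSACTION; -- undo the test write either way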
Try the last insert/update up front with a where condition that can never match, like
UPDATE MyTable SET MyCol = 1 WHERE 1 = 2;  -- permission is checked even though no rows match
IF (@@ERROR <> 0)
    RAISERROR('no permissions', 16, 1); -- a custom error number like 6666 would need sp_addmessage first
This would not cause any harm but would raise a flag up front about the lack of rights.
I am in charge of a database.
It has around 126 sprocs, some 20 views, and some UDFs. There are some tables that hold fixed configuration data for our various applications.
I have been using one big text file that contained IF EXISTS ... DELETE / GO / CREATE PROCEDURE ... for all the sprocs, UDFs, views, and all the inserts/updates for the configuration scripts.
In the course of time, new sprocs were added and existing sprocs were changed.
The biggest mistake (as far as I am aware) I made in building this BIG single text file was to put the code for new/changed sprocs at the beginning of the file while forgetting to remove the previous code for those same sprocs. Let's illustrate this:
Say my BIG script (version 1) contains script to create sprocs
sp 1
sp 2
sp 3
view 1
view 2
The database's version table gets updated to version 1.
Now there is some change in sp 2, so version 2 of the BIG script is now:
sp2 --> (newly added)
sp1
sp2
sp3
view 1
view 2
So, obviously, running version 2 of the BIG script is not going to update my sp 2: the old block later in the file recreates the sproc over the new one. I realised this rather late, with 100+ sprocs in play.
Remedial Action:
I have created a folder structure. One subfolder for each sproc/view.
I have gone through the latest version of the BIG script from the beginning and placed the code for all scripts into the respective folders. Some scripts are repeated more than once in the BIG script. If there is more than one block of code for creating a specific sproc, I put the earlier version into another subfolder called "old" within the folder for that sproc. Luckily I have always documented all the changes I made to sprocs/views etc - I write down the date, a version number and a description of the changes as a comment in the sproc's code. This has helped me a lot in figuring out the latest version of the code for a sproc when there is more than one block of code for it.
I have created a DOS batch process to concatenate all the individual scripts to create my BIG script. I tried using .NET StreamReader/StreamWriter, but it messed up the encoding and the "£" sign, so I am sticking with DOS batch for the time being.
Is there any way I can improve the whole process?
At the moment I am after some way to document the versioning of the BIG script along with the versions of its individual sprocs. For example, I'd like some way to document:
Big Script (version 1) contains
sp 1 version 1
sp 2 version 1
sp 3 version 3
view 1 version 1
view 2 version 1
Big script (version 2) has
sp 1 version 1
sp 2 version 2
sp 3 version 3
view 1 version 1
view 2 version 1
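For example, a small manifest table in the database itself could record this (just a sketch - the table and column names are made up):
CREATE TABLE dbo.ScriptManifest (
    ScriptVersion int     NOT NULL,
    ObjectName    sysname NOT NULL,
    ObjectVersion int     NOT NULL,
    CONSTRAINT PK_ScriptManifest PRIMARY KEY (ScriptVersion, ObjectName)
);
-- Big Script version 2 contains:
INSERT INTO dbo.ScriptManifest (ScriptVersion, ObjectName, ObjectVersion) VALUES (2, 'sp 1', 1);
INSERT INTO dbo.ScriptManifest (ScriptVersion, ObjectName, ObjectVersion) VALUES (2, 'sp 2', 2);
INSERT INTO dbo.ScriptManifest (ScriptVersion, ObjectName, ObjectVersion) VALUES (2, 'sp 3', 3);
INSERT INTO dbo.ScriptManifest (ScriptVersion, ObjectName, ObjectVersion) VALUES (2, 'view 1', 1);
INSERT INTO dbo.ScriptManifest (ScriptVersion, ObjectName, ObjectVersion) VALUES (2, 'view 2', 1);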
Any feedback is welcomed.
Have you looked at Visual Studio Team System Database Edition (now folded into Developer Edition)?
One of the things it will do is allow you to maintain the SQL to build the whole database, and then apply only the changes needed to bring the target to the new state. I believe it will also, given a reference database, create a script to bring a database matching the reference schema up to the current model (e.g. to deploy to production without developers having access to production).
The way we do it is to have separate files for tables, stored procedures, views etc. and store them in their own directories as well. For execution we just have a script which executes all the files. It's definitely a lot easier to read than having one huge file.
To update each table for example, we use this template:
if not exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[MyTable]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
begin
    CREATE TABLE [dbo].[MyTable](
        [ID] [int] NOT NULL,
        [Name] [varchar](255) NULL
    ) ON [PRIMARY]
end
else begin
    -- MyTable.Name
    IF (SELECT COL_LENGTH('MyTable','Name')) IS NULL BEGIN
        ALTER TABLE MyTable ADD [Name] [varchar](255) NULL
        PRINT 'MyTable.Name CREATED.'
    END
    -- etc
end
When I had to handle a handful of SQL tables, procedures and triggers, I did the following:
All files under version control (CVS at that time but look at SVN or Bazaar for example)
One file per object named after the object
a makefile stating the dependencies between files
It was an Oracle project, and every time you change a table you have to recompile its triggers.
And my triggers used several modules, so they also had to be recompiled when their dependent modules were updated...
The makefile avoids the "big file" approach: you don't have to execute ALL your code for every change.
Under Windows you can download NMAKE.exe to use makefiles.
HTH
Please see my answer to a similar question, which may help:
Database schema updates
Some additional points:
When we make a Release, e.g. for Version 2, we concatenate together all the Sprocs that have a modified date more recent than the previous Release.
We are careful to add at least one blank line to the bottom of each Sproc script, and to start each Sproc script with a comment - otherwise concatenation can yield "GOCREATE NextSproc" - which is a bore!
When we run the concatenated script we sometimes find that we get conflicts - e.g. calling sub-Sprocs that don't already exist. We duplicate the code for such Sprocs at the bottom of the script - so they are recreated a second time - to ensure that SQL Server's dependency table is correct. (i.e. we sort this out at the QA stage for the Release)
Also, we put a GRANT permissions statement at the bottom of each Sproc script, so that when we Drop / Create an SProc we re-Grant the permissions. However, if your permissions are allocated per user, or are assigned differently on each server, then it may be better to use ALTER rather than CREATE - but that is a problem if the SProc does not already exist, so then it is best to do:
IF NOT EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'[dbo].[MySproc]')
    AND OBJECTPROPERTY(id, N'IsProcedure') = 1)
    EXEC ('CREATE PROCEDURE MySproc AS SELECT ''STUB''')
GO
GRANT EXECUTE ON MySproc TO ...
and then that stub is immediately replaced by the real ALTER PROCEDURE statement.