Can't recreate deleted SSAS database - ssas

I didn't like the DatabaseID of my SSAS database, so I decided to delete and recreate it.
I generated a create script and a delete script, and changed the ID in the create script to the one I wanted.
I ran the delete script. It ran successfully, and after refreshing I verified that the database had been deleted.
Now when I run the create script, I get:
"Either the user [MyUserName] does not have access to the [MyDatabaseName] database, or the database does not exist."
Well, no &*^% it doesn't exist, I'm trying to create it.
Googling hasn't yielded any results so far. Any ideas? Do I need to do some additional clean-up somewhere before I can recreate the database?

The create script is still "pointing at" the old cube. Close that query window and create a new one.

Related

Wait for CockroachDB Command to Finish

I currently have a .sql file with the likes of:
DROP VIEW IF EXISTS vw_example;
CREATE VIEW vw_example as
SELECT a FROM b;
When running this as part of a Flyway migration, if the view already exists it fails, as if the CREATE is not waiting for the DROP VIEW IF EXISTS to finish.
I know SQL Server has a GO-type keyword. Is there a way to tell CockroachDB to wait for the first command?
As per the issue mentioned in the link, it is better to put the DROP and the CREATE in separate migration files, since Flyway runs each migration in a single transaction.
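A minimal sketch of that split, assuming Flyway's usual versioned-migration naming (the file names here are illustrative):

-- V2__drop_vw_example.sql
DROP VIEW IF EXISTS vw_example;

-- V3__create_vw_example.sql
CREATE VIEW vw_example AS
SELECT a FROM b;

Each file then runs in its own transaction, so the DROP is committed before the CREATE VIEW executes.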

Redgate migration scripts not running on deployment

I've been reading through the Redgate documentation on migration scripts and I'm trying to add a new column to a table that has a foreign key from another table.
Here's what I have done:
1. Added the new column, made it nullable, created the relationship to a new table, and committed the changes.
2. Added static data to the new table so that the migration can run, and committed this static data.
3. Added a blank migration script, and in it set all null values on the column created in the last commit to the Id of one of the records in the related table, then committed this change.
4. Ran a deployment of both commits to my testing environment, where records already exist.
The problem I'm having is that the column gets created, but the script does not seem to run, as the column values stay null. I've verified that the script does change the column values: when I run it manually, it executes successfully.
Am I doing something wrong when using these scripts? Thanks.
I was creating blank migration scripts, which led SQL Compare to set the column as NOT NULL. You have to specifically create the migration script on the schema change that requires it, or SQL Compare will override all changes.
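As a rough sketch, the migration script attached to that schema change is just a backfill of the new nullable column (table and column names below are invented, since the originals aren't shown):

-- Migration script attached to the commit that adds the nullable column
-- (hypothetical names)
UPDATE dbo.Orders
SET CategoryId = (SELECT TOP 1 Id FROM dbo.Categories)
WHERE CategoryId IS NULL;

Attaching it to the schema change that requires it means SQL Compare deploys the new column and the backfill together.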

EF Code First rollback database table design

I accidentally deleted my database tables and I need to get them back. I have tried running update-database, but I only get:
Cannot find the object "dbo.ArticleComments" because it does not exist or you do not have permissions.
I also tried running Update-Database -TargetMigration:"name_of_migration" with the migration name, but that resulted in:
Cannot find the object "dbo.ArticleComments" because it does not exist or you do not have permissions.
I need to know how to get my database tables back with their columns (empty or not, I don't care).
This may be the issue in your situation.
Check the problematic table dbo.ArticleComments. If you renamed or deleted it, you'll get this kind of error, because the table existed when the old migration script was created and it doesn't exist now. When you try to run that same old migration, the table is no longer in your DbSet, or it is there under a different name.
Solution:
If that is the case, you have to manually edit your migration file to reflect the current table changes.

Load data from excel file into a temporary table

I needed to load 100,000 rows of data from an Excel file into a temporary table that I created using ON COMMIT PRESERVE ROWS. But somehow the most efficient methods did not seem to populate the temporary table, apparently due to session issues.
I used Toad's Import Table Data and it showed that x records were imported, but when I selected from the temp table it was empty. Then I generated a bunch of insert scripts, saved them in a notepad.sql, called it from the Toad editor using #/script/location/notepad.sql, and hit F5. It ran and showed how many records were inserted; again, the temp table was somehow still empty. So I ran a single insert script manually in the editor, and that row did show up in the temp table. I believe the methods that didn't work are not running in the same session?
I haven't tried SQLLDR, but I am assuming it will not work either, judging from the methods I tried. Can someone confirm? I can't access SQLLDR, so I won't know.
Is there any way to get this to work? I can't run the insert scripts manually; that would be time consuming, and Toad can't take that many scripts at the same time.
Oracle temp tables created with ON COMMIT PRESERVE ROWS are session-specific, so the data put into them is only visible within a single session, and for the duration of that session. Toad may be creating a separate session for each window and thus data which is populated from one window/session isn't visible from another window/session. The fact that you can run an insert script and then select the data back suggests this may be the case if both operations were done from the same window. I expect you'd see the same behavior if you used SQL*Loader to load the tables because the load would run in one session and the data would be discarded when the session terminated. Best of luck.
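A quick way to see that session behaviour (the object names here are made up):

-- run once to define the global temporary table
CREATE GLOBAL TEMPORARY TABLE tmp_load_stage (
    id  NUMBER,
    val VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;

-- session A
INSERT INTO tmp_load_stage (id, val) VALUES (1, 'only visible here');
COMMIT;
SELECT COUNT(*) FROM tmp_load_stage;   -- 1 in session A

-- session B (e.g. a second Toad window/connection)
SELECT COUNT(*) FROM tmp_load_stage;   -- 0

So whatever loads the data (the Toad import, a script, or SQL*Loader) has to run in the same session that later queries the table.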

Why does a temp table work but not a permanent table?

I've written a SQL query for a report that creates a permanent table and then performs a bunch of inserts and updates to get all the data, according to company policy. It runs fine in SQL Server Management Studio and in Crystal Reports 2008 on my machine. However, when I schedule it to run on the server with SAP BusinessObjects Central Management Console, it fails with the error "Associated statement not prepared."
I have found that changing this permanent table to be a temp table makes the query work. Why would this be?
Some research shows that this error is sometimes sent instead of the true error. Other people reporting it talk of foreign key and (I would also assume) duplicate key errors.
Things I would check:
Does your permanent table have any unique constraints that might be violated? Or any foreign key constraints?
Are you creating indexes on the table after it has been created?
Are you creating any views over this permanent table?
What happens if the table already exists before the job is run?
What happens to the table if the job fails?
Are there any intermediate steps (such as within a stored procedure) that might involve additional temp or permanent tables?
ETA: Also check what schema the permanent table belongs to: is it usually created with "dbo"? Are you specifying that explicitly? Is there any chance that there might be a permissions problem?
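On the schema point above, note that an unqualified CREATE TABLE resolves against the executing account's default schema, so the scheduled job may not be creating the table where the report expects it (names here are hypothetical):

CREATE TABLE ReportData (Id INT);      -- may end up as SomeUser.ReportData
CREATE TABLE dbo.ReportData (Id INT);  -- explicit schema removes the ambiguity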
That is often a generic error. Are you able to run it on the server as the account that it is scheduled to run as? It is most likely a permission error or constraint issue.
Assuming you really need a regular table, why not create the permanent table once, instead of creating it every time you run the query?
Recreating a regular user table each time the query runs does not seem right. But to make it work, you may try to recreate the table in a separate batch or query (e.g. put GO in the script, which splits it into separate batches).
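A minimal sketch of that separate-batch idea (the table is invented; GO is the batch separator recognised by SSMS/sqlcmd):

-- batch 1: (re)create the table
IF OBJECT_ID('dbo.ReportData', 'U') IS NOT NULL
    DROP TABLE dbo.ReportData;
CREATE TABLE dbo.ReportData (Id INT, Amount DECIMAL(18, 2));
GO

-- batch 2: the inserts/updates now compile against the freshly created table
INSERT INTO dbo.ReportData (Id, Amount) VALUES (1, 10.00);
GO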
Regarding why it happens, I'm thinking of statement caching. The server compiles the query and stores the result for some time in case the same query has to run again. So my speculation is that it tries to run the compiled query, which refers to the table you have already dropped and recreated under the same name. The name is the same, but physically it's a new table. You could be hitting a bug in the server this way. That's just speculation; it could be a different kind of problem.
Without seeing code it's a guess, but given that you are creating a permanent table every time you run the report, I assume you must be dropping the table at some point? (Or you'd have a LOT of tables building up over time.)
I suggest a couple angles to consider:
1) Make certain to prefix table names (perhaps with a session ID or something) if you are concerned about concurrency/locking issues and the like, so each report run has a table exclusive to itself.
2) If you are dropping the table at the end, adjust your logic to leave the table be instead. Write code that drops it when you (re)start the operation. It's possible the report is clinging to the table and you are destroying it prematurely.
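A sketch of that drop-at-start pattern (the table name is hypothetical):

-- at the start of the report query: remove any leftover copy, then rebuild,
-- and do NOT drop it at the end
IF OBJECT_ID('dbo.ReportRun_MyReport', 'U') IS NOT NULL
    DROP TABLE dbo.ReportRun_MyReport;

CREATE TABLE dbo.ReportRun_MyReport (Id INT, Amount DECIMAL(18, 2));

-- ... inserts/updates, then the final SELECT the report consumes ...

The next run cleans up the previous run's table, so the report never has the table pulled out from under it mid-execution.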