TFS 2010 Analysis Cube's "File" Dimension Isn't Deleted on Rebuild

I have a problem with the Analysis cube: it keeps growing. I have managed to pinpoint it to a single dimension, the Version Control File dimension (id = file), which keeps growing. The symptom is that on (almost) every rebuild, the old files (the .fact.map and .fact.map.hdr files) do not get deleted. Over time this accumulates, and I have reached the point where the database has grown to an unreasonable 146 GB, of which the file dimension accounts for 145 GB!
My guess is that some error occurs on almost every rebuild, causing the operation to stop before it reaches its cleanup phase.
I know how to treat the symptoms (deleting and manually recreating the analysis cube), but I need to understand why this happens and how to fix it once and for all.
Does anybody know what causes this, or how to fix it?

What version of SQL Server do you have installed? I believe this may have been fixed in the latest CU for 2008 R2.
http://support.microsoft.com/kb/979778
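To see exactly which build you're on (and therefore whether that CU is applied), you can run a quick check against the instance; this is just a standard SERVERPROPERTY query:

-- Report the exact build, servicing level, and edition of the instance
SELECT
    SERVERPROPERTY('ProductVersion') AS ProductVersion, -- e.g. 10.50.xxxx for 2008 R2
    SERVERPROPERTY('ProductLevel')   AS ProductLevel,   -- RTM, SP1, SP2, ...
    SERVERPROPERTY('Edition')        AS Edition;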

Related

Visual Studio 2022 - Compatibility Mode 1600 error

I upgraded my SSAS tabular cube project's compatibility mode to 1600 after adding some partitions to my fact table; however, at deployment I received this error message:
Cannot deploy metadata. Reason: Failed to save modifications to the server. The error returned: 'The operation cannot be performed because it references an object or property that is unavailable in the current edition of the server or the compatibility level of the database.'
Please help or advise.
After playing with almost all the properties on the cube I got nowhere. Finally, I used Git to revert my changes and reapplied them one at a time, building and deploying after each. In the end I discovered that if you make all your cube project changes and deploy them first, and only as the very last step update the compatibility mode to 1600 and deploy again to your server (mine is Azure Analysis Services), it builds and deploys with no errors.
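If it helps, once deployed you can confirm the compatibility level the server actually reports by running a DMV query from an MDX query window in SSMS. This is only a sketch, but DBSCHEMA_CATALOGS is a standard Analysis Services schema rowset:

-- List the databases on the AS instance together with their compatibility levels
SELECT [CATALOG_NAME], [COMPATIBILITY_LEVEL]
FROM $SYSTEM.DBSCHEMA_CATALOGS;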
It appears to be a bug in VS 2022 with compatibility mode 1600, and plenty of people are talking about it.

Recent rash of Microsoft Access database files in an inconsistent state

A large number of our clients operating a split front end/back end Microsoft Access application we built are encountering frequent but intermittent database file corruption issues. When the back end file is opened this message appears: "Microsoft Access has detected that this database is in an inconsistent state, and will attempt to recover the database … "
Opening the database with DAO using Visual Basic code results in error code 3343, "Unrecognized database format."
The repair attempt succeeds and we have not witnessed any data loss or dropping of primary keys, indexes, or relationships. In most cases the back end file is located on a shared network drive. Some searches suggest that the latest Windows 10 update, 1803, is the suspect. Has anybody else encountered this?
It has recently been reported several times. A very thorough coverage of this issue can be found here.
Strangely, the cure can - at least for some cases - be found in old support threads:
Moved to Server 2012 getting Access Database Corruption
Cannot access shared files or folders on a drive in Windows Server 2012 or Windows Server 2012 R2
Comments:
It's a bit strange though, as the patch to fix the issue dates back to May 2014 and is already installed on the server. I can only think that something in the latest Windows 10 build 1803 has brought the issue up again, as it was PCs running that build that were causing the problem.
The fix is adding the following entry into the Vospers Server 2012 R2 registry:
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters
Value: DisableLeasing
Type: DWORD
Data: 0x1
We tested this on our server and the problem went away. As soon as we changed the 'DisableLeasing' value back to '0', the problem returned. I can't find a reasonable explanation yet as to why this started happening last week, but if it works and doesn't cause any further issues elsewhere then I'm OK with that.
Also note that homegroups have been removed from Win10 1803. This may affect rights on shared folders.

CloverETL DBOutputTable - Update Statement Won't Complete

I'm running a process that updates flags in a SQL Server database table. Essentially, the graph reads a .csv file, then uses the variables in the update statement. The universal reader is completing but the DBOutputTable component is hanging and won't complete. The funny thing is that earlier in the graph there's another DBOutputTable component that does almost the exact same thing and finishes successfully. Does anyone know what the issue could be?
I've restarted the services and the server itself. This process typically completes without issue but it just started hanging a few days ago.
I would guess a missing or insufficient index on that table. That would only manifest after a while, once the data set grows larger.
Also double-check whether the previous DBOutputTable is doing its update on the same fields.
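For illustration, if the update's WHERE clause keys on an ID and a flag column, a supporting index along these lines would let SQL Server seek instead of scanning the whole table for every row. The table and column names here are made up; substitute your own:

-- Hypothetical index covering the columns the UPDATE statement filters on
CREATE NONCLUSTERED INDEX IX_MyTable_UpdateLookup
    ON dbo.MyTable (RecordID, FlagValue);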

SQL Server 2012 Installed Copy Showing Problems

My problem is like this: I have had a copy of SQL Server 2012 installed on my machine for over 3 years without any glitches at all. Just 4-5 days ago a problem sprouted up. When I started Management Studio, it told me that
msdb got corrupted so it cannot be opened.
The complete message is something like this:
Cannot display policy health state at the server level, because the user doesn't have permission. Permission to the database msdb is required for this feature to work correctly.
So what could be wrong here? What sudden changes or anomalies could have crept in that made this unstable? Someone told me it could be due to a wide range of possibilities; the reason could be almost anything. Even some NuGet packages affect the database. Initially I thought this could have been an issue with logins, permissions, etc., so I also tried running as administrator. No, it did not cure the problem. If I try to create a new database, it simply tells me I can't do it. The message is something like this:
An exception occurred while executing a T-SQL statement or batch.[Microsoft.SqlServer.ConnectionInfo]. Database msdb cannot be opened. It has been marked as SUSPECT by recovery. [Microsoft Sql Server, Error:926]
How do I recover from this? Can you provide me some guidance, or a clue where precisely to look for the source of the problem? All my work is stalled. Any kind of assistance in recovering my ailing SQL Server installation will be humbly received.
So, I'm requesting you all to show me the way.
Thanks in anticipation.
I fixed mine with Solution C from the following website. My msdb was corrupt and not loading, so I stopped the services and replaced it with the files from the template in the SQL Server directory.
https://www.mssqltips.com/sqlservertip/3191/how-to-recover-a-suspect-msdb-database-in-sql-server/
"The templates are saved in "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Binn\Templates" (the path varies by version and install choices, this is the default for SQL Server 2012). By shutting down the instance and replacing the bad MSDB data (msdbdata.mdf) and transaction log (msdblog.ldf) files with the template files I was able to restart the instance without error!" (just incase the website link doesn't work I have quoted it here).
Fissh
If your MSDB is corrupted, restore from your most recent backup. That's the safest thing to do and that's why we have backups to begin with.
If you do not have a backup of MSDB, you have a couple of options.
Recreate it. Detailed instructions here: https://msdn.microsoft.com/en-us/library/dd207003(v=sql.110).aspx#CreateMSDB. This is the best way to ensure you get a clean, functional msdb and is the fastest way to get up and running again. IMPORTANT: doing this means you lose all jobs, backup history, etc. that are stored in msdb. Remember to recreate all maintenance jobs after you're done, else you're just waiting for the next thing to fall over (e.g. transaction log backups no longer run, the transaction logs grow until you run out of disk space, and then you can't run any queries that commit transactions).
DBCC CHECKDB WITH REPAIR_ALLOW_DATA_LOSS is another option, which you'll probably find if you Google/Bing the issue. This might work, but it is not recommended. The problem is that you don't really know what will be lost: it works by deleting what it can't read and then fixing up the links to get the database physically functional again. Once that's done, you'll have to go back and figure out what remains and is still functional. That is tedious and error-prone. Besides, if you're going to do that very thorough manual check to ensure all your jobs are intact, you're better off just re-creating them on a new, clean msdb.
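For reference, if you do go down the repair route anyway, the sequence usually looks roughly like this (a sketch only, offered with all the caveats above about data loss):

-- Put the suspect database into a repairable state, repair it, then reopen it
ALTER DATABASE msdb SET EMERGENCY;
ALTER DATABASE msdb SET SINGLE_USER;
DBCC CHECKDB ('msdb', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE msdb SET MULTI_USER;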

TFS2010 database size

We've been using TFS since around 2009 when we installed TFS2008. We upgraded to TFS2010 at some point and we've been using it for source control, work item management, builds etc.
Our TFSVersionControl.mdf file is 287,120,000 KB (273 GB). We ran some queries and found that our tbl_BuildInformationField table is massive: it has 1,358,430,452 rows, which take up 150,988,624 KB (143 GB). We have multiple active products across multiple active builds, with more than one solution per build, and the solutions aren't free of warning messages.
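(The space figures came from a query along these lines against the collection database; this is a sketch using the standard sys.dm_db_partition_stats DMV:)

-- Approximate row counts and reserved space per table, largest first
SELECT
    OBJECT_NAME(ps.object_id)       AS TableName,
    SUM(ps.row_count)               AS [RowCount],
    SUM(ps.reserved_page_count) * 8 AS ReservedKB
FROM sys.dm_db_partition_stats AS ps
WHERE ps.index_id IN (0, 1) -- heap or clustered index only
GROUP BY ps.object_id
ORDER BY ReservedKB DESC;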
My questions:
1. Is it possible to stop MSBuild from spamming the tbl_BuildInformationField table so much, i.e. only write errors and general build information and not all the warnings for every project?
2. Is there a way to purge or clean up old data from this table?
3. Is 273 GB for 4 years of TFS use an average size?
4. Is 143 GB for tbl_BuildInformationField a "normal" size?
The table holds the values and output of the build process. Take note that the build retention policy doesn't actually delete the build object; like everything else in TFS, the object is only marked deleted, and only its public visibility and drop location are cleared.
If you have retained the same build definitions for a very long time (when a build definition is deleted, the related objects are removed as well), I would suggest querying for build information, including deleted builds, using the TFS API; the same API will also allow you to remove them for good. Deleting the build definitions themselves probably will not work and will fail with a timeout error.
You can consult the following:
http://blogs.msdn.com/b/adamroot/archive/2009/06/12/working-with-deleted-build-data-in-team-foundation-server-2010-beta-1.aspx