SSAS Processing Cubes - Won't Work in PowerShell but Works in Visual Studio

I'm attempting to process cubes and dimensions in PowerShell. It had been working for a while, but all of a sudden it stopped. I can process the dimensions and cubes in Visual Studio, but processing them with a PowerShell script in the same order gives me a duplicate attribute key error.
PowerShell script:
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")
$serverAS = New-Object Microsoft.AnalysisServices.Server
$serverAS.connect("SERVER")
$db = $serverAS.databases["ANALYSIS DB"]
$db.cubes | select name, storagemode, lastprocessed
$db.dimensions | select name, isparentchild, lastprocessed, storagemode
Foreach ($c in $db.dimensions) {$c.process("ProcessFull")}
Foreach ($c in $db.cubes) {$c.process("ProcessFull")}

You need to ignore the key errors like this:
## Set up the Error Configuration so that Key Errors are ignored
$errorConfig = New-Object `
Microsoft.AnalysisServices.ErrorConfiguration("D:\ProcessLogs\")
$errorConfig.KeyNotFound = "ReportAndContinue"
$errorConfig.KeyErrorAction = "ConvertToUnknown"
$errorConfig.KeyErrorLimit = -1
and then process with this error config parameter:
## Process the current database
$c.Process("ProcessFull", $errorConfig)
Reference and Example:
http://www.biadmin.com/2012/07/bi-admin-scripts-process-ssas-database.html
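Putting the pieces together, a complete run could look like the sketch below. The server, database, and log folder names are the ones from the question, and dimensions are processed before cubes as in the original script; treat it as a template rather than a drop-in:
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null
$serverAS = New-Object Microsoft.AnalysisServices.Server
$serverAS.Connect("SERVER")
$db = $serverAS.Databases["ANALYSIS DB"]
## Report key errors to the log folder and convert offending rows to the unknown member
$errorConfig = New-Object Microsoft.AnalysisServices.ErrorConfiguration("D:\ProcessLogs\")
$errorConfig.KeyNotFound = "ReportAndContinue"
$errorConfig.KeyErrorAction = "ConvertToUnknown"
$errorConfig.KeyErrorLimit = -1
## Dimensions first, then cubes, passing the error configuration to each Process call
Foreach ($d in $db.Dimensions) { $d.Process("ProcessFull", $errorConfig) }
Foreach ($c in $db.Cubes) { $c.Process("ProcessFull", $errorConfig) }
$serverAS.Disconnect()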

Thanks for the response. I was actually able to get around this by using SSDT and Integration Services to process Dimensions and Cubes.

Related

Why doesn't Get-AzDataFactoryV2PipelineRun return InProgress pipeline runs?

I am running an Azure Function App with the following PowerShell script to get ADF pipeline runs. It returns the full history of pipeline runs but skips the active (InProgress) run, returning Completed/Failed/Cancelled runs only. The Microsoft documentation shows that it should return the InProgress status as well. I triggered the ADF pipeline and made sure it was running; this script only returned runs starting from the last completed run and earlier, completely ignoring the active/InProgress run.
This is the script:
#Input Parameters
$ResourceGroupName = 'ABC'
$DataFactoryName = 'XYZ'
$before = (get-date).AddDays(1)
$after = (get-date).AddDays(-1)
#Get Pipeline Run History
$runIds = Get-AzDataFactoryV2PipelineRun -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName -LastUpdatedAfter $after -LastUpdatedBefore $before
#Return all Retrieved Statuses
$runIds | ForEach-Object {
    ## Check for all statuses
    Write-Host $_.Status
    Write-Host $_.RunStart
    Write-Host $_.RunEnd
}
Is this a bug, or am I missing something here?
The code looks fine; I tested it at my end and it works. I would give it a shot with a wider timeframe:
$before = (get-date)
$after = (get-date).AddDays(-5)
Also, ensure the pipeline run has not already completed before the script executes.
I figured out the issue: the collection it returns was sorted differently than I expected. Because active pipeline runs have a null RunEnd, the active pipelines were placed at the end of the returned rows. So it listed all previous runs sorted from most recent to oldest (descending), and only then, at the end, listed the active runs, again from most recent to oldest.
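If you want the active runs regardless of how the collection is ordered, filtering on the status explicitly avoids the surprise. A minimal sketch against the $runIds collection from the script above (Status and PipelineName are the documented properties of the returned run objects):
#Show only the active runs, wherever they appear in the result set
$runIds |
    Where-Object { $_.Status -eq 'InProgress' } |
    ForEach-Object { Write-Host "$($_.PipelineName) started $($_.RunStart) and is still running" }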

Microsoft Access Database Engine causing fatal communications error with Windows Process Activation

I'm using the Access Database Engine Redistributable to read an Access database in my .NET application, and for some reason, whenever I use a join to query data from the Access database to fill a DataTable, it causes a fatal communications error with the Windows Process Activation Service. I can populate DataTables without issue as long as there is only one table; as soon as I add just one join, I get the system error. There are no errors in my application to trap: the system error occurs and then the application stops processing. This only happens on one server; my local computer doesn't have this issue, even though I'm using the exact same redistributable on both. Two things boggle my mind: why does a join cause an issue, and why does it produce a system error instead of pushing the error up to the app? A single-table query works fine.
Steps to populate the DataTable:
accessConnection = New System.Data.OleDb.OleDbConnection( _
    "Provider=Microsoft.ACE.OLEDB.12.0; Data Source='" & uploadedFileName & "'; Persist Security Info=False;")
accessDataAdapter = New System.Data.OleDb.OleDbDataAdapter( _
    "SELECT * FROM Table1 INNER JOIN Table2 ON Table1.PK = Table2.PK", accessConnection)
accessDtSet = New System.Data.DataSet
accessDataAdapter.Fill(accessDtSet) ' <-- the application fails here
accessView = accessDtSet.Tables("Locations").DefaultView
This is just a guess but maybe try the following provider in your connection string:
Provider=Microsoft.Jet.OLEDB.4.0
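If you want to confirm whether the provider is the culprit before changing the app, a standalone repro can help. Here is a minimal sketch in PowerShell; the .mdb path and table names are hypothetical, and note that the Jet 4.0 provider is 32-bit only and reads .mdb files rather than .accdb, so it must run from a 32-bit host:
# Hypothetical standalone repro of the failing join (use 32-bit PowerShell for Jet)
$connStr = "Provider=Microsoft.Jet.OLEDB.4.0; Data Source=C:\temp\test.mdb; Persist Security Info=False;"
$conn = New-Object System.Data.OleDb.OleDbConnection($connStr)
$adapter = New-Object System.Data.OleDb.OleDbDataAdapter("SELECT * FROM Table1 INNER JOIN Table2 ON Table1.PK = Table2.PK", $conn)
$table = New-Object System.Data.DataTable
$adapter.Fill($table)   # the same call that crashes under ACE in the question
Write-Host "Rows returned: $($table.Rows.Count)"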

SQL Server job step retry attempts for specific error

I have a SQL job (a four-step job consisting of SSIS packages) which runs on a daily basis and extracts data from various sources (source1, source2, source3), then loads the data into a warehouse. My job currently fails at step 1 due to a 'Communication link failure' with source1.
Is there any way I can set up the SQL job to retry only for this specific error?
For example, if I get a 'primary key violation' or some other data-related error, we should be notified immediately that the job failed; but if the error is 'Communication link failure', step 1 should retry.
Any suggestion would be appreciated.
Short answer: no, not with the SQL Agent.
Longer answer: maybe you can build some logic where the package checks whether the previous error was the specific error you're looking for and, if so, executes again. Cumbersome, but possible.
You can create an Event Handler for the OnError event with a Script Task that checks for this specific error and executes msdb.dbo.sp_start_job if it occurred. Since I'm not sure of the exact error code you're getting, this only checks the @[System::ErrorDescription] system variable for the specific text, using the StringComparison.CurrentCultureIgnoreCase option to make the match case-insensitive. However, I would strongly recommend finding the exact error code and verifying it via the @[System::ErrorCode] variable instead. I'd also suggest only retrying the job a certain number of times, or within a given time frame, to avoid excessive failures if the issue persists.
// Script Task code; requires: using System.Data.SqlClient;
string errorMsg = Dts.Variables["System::ErrorDescription"].Value.ToString();
if (errorMsg.IndexOf("Communication link failure", 0, StringComparison.CurrentCultureIgnoreCase) >= 0)
{
    string connString = @"YourConnectionString;";
    string startJobCmd = @"EXEC msdb.dbo.sp_start_job N'NameOfJobToRetry';";
    using (SqlConnection conn = new SqlConnection(connString))
    {
        SqlCommand sql = new SqlCommand(startJobCmd, conn);
        conn.Open();
        sql.ExecuteNonQuery();
    }
}
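To avoid retrying forever, one option is to check the job's recent failure count in msdb before calling sp_start_job. Below is a sketch that reuses connString and startJobCmd from the snippet above; the job name and the limit of three are assumptions (in sysjobhistory, run_status = 0 marks a failure, step_id = 0 is the job outcome row, and run_date is an integer in yyyymmdd form):
// Hypothetical retry cap: only restart if the job has failed fewer than 3 times today
const int maxRetries = 3;
string countCmd = @"SELECT COUNT(*)
                    FROM msdb.dbo.sysjobhistory h
                    JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
                    WHERE j.name = N'NameOfJobToRetry'
                      AND h.run_status = 0
                      AND h.step_id = 0
                      AND h.run_date = CONVERT(int, CONVERT(char(8), GETDATE(), 112));";
using (SqlConnection conn = new SqlConnection(connString))
{
    conn.Open();
    int failuresToday = (int)new SqlCommand(countCmd, conn).ExecuteScalar();
    if (failuresToday < maxRetries)
    {
        new SqlCommand(startJobCmd, conn).ExecuteNonQuery();
    }
}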

How can I get the Last Processed timestamp for an SSAS tabular cube?

In SSMS I have connected to an SSAS tabular cube. When I view the properties screen, I see a Last Processed timestamp of 11/24/2015 2:59:20 PM.
If I use SELECT LAST_DATA_UPDATE FROM $system.MDSchema_Cubes I see a timestamp of 11/25/2015 12:13:28 PM (if I adjust for the timezone).
If I open the partitions screen for one of the tables in my cube, I see that the most recent Last Processed timestamp is 11/25/2015 12:13:28 PM, which matches the value from the DMV.
I want the Last Processed timestamp for my BISM, the one from the Database Properties screen, not the one from a partition that happened to be processed later.
Is there a way to get this programmatically?
You can use the Analysis Services Stored Procedure (ASSP) assembly, which you can download from here.
Once you get the assembly file that corresponds to your Analysis Server version, connect to your instance via SSMS.
Look for your Database (Database Cube)
Go to the Assemblies folder
Right-click and choose New Assembly...
Browse and select the assembly.
Set the permissions as described in the assembly documentation.
Once you have imported the assembly, use this MDX query to get the last processed timestamp:
with member [Measures].[LastProcessed] as ASSP.GetCubeLastProcessedDate()
select [Measures].[LastProcessed] on 0
from [Armetales DWH]
Let me know if this can help you.
After looking at the code in the Analysis Services Stored Procedure assembly, I was able to put together a PowerShell script that gets the date I was looking for. Here is the code:
#we want to always stop the script if any error occurs
$ErrorActionPreference = "Stop"
$error.Clear()
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null
$databases = @('BISM1', 'BISM2')
$servers = @('Server1\BISM', 'Server2\BISM')
function Get-BISMLastProcessed
{
    param(
        [string] $connStr
    )
    Begin {
        $server = New-Object Microsoft.AnalysisServices.Server
        $server.Connect($connStr)
    }
    Process {
        Try {
            $database = $server.Databases.GetByName($_)
            Write-Host " Database [$($database.Name)] was last processed $($database.LastProcessed)"
        }
        Catch [System.Exception] {
            Write-Host $Error[0].Exception
        }
        Finally {
            if ($database -ne $null) {
                $database.Dispose()
            }
        }
    }
    End {
        $server.Dispose()
    }
}
foreach ($server in $servers) {
    $connectStr = "Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=BISM1;Data Source=$server"
    Write-Host "Server [$server]"
    $databases | Get-BISMLastProcessed $connectStr
    Write-Host "----------------"
}
The results are:
Server [Server1\BISM]
Database [BISM1] was last processed 11/30/2015 12:25:48
Database [BISM2] was last processed 12/01/2015 15:53:56
----------------
Server [Server2\BISM]
Database [BISM1] was last processed 11/30/2015 12:19:32
Database [BISM2] was last processed 11/02/2015 23:46:34
----------------

Use Multiple DBs With One Redis Lua Script?

Is it possible to have one Redis Lua script hit more than one database? I currently have information of one type in DB 0 and information of another type in DB 1. My normal workflow is doing updates on DB 1 based on an API call along with meta information from DB 0. I'd love to do everything in one Lua script, but can't figure out how to hit multiple dbs. I'm doing this in Python using redis-py:
lua_script(keys=some_keys,
           args=some_args,
           client=some_client)
Since the client implies a specific db, I'm stuck. Ideas?
It is usually a bad idea to put related data in different Redis databases. There is almost no benefit compared to defining namespaces by key naming conventions (no extra granularity for security, persistence, expiration management, etc.). And a major drawback is that clients have to handle selection of the correct database manually, which is error-prone for clients targeting multiple databases at the same time.
Now, if you still want to use multiple databases, there is a way to make it work with redis-py and Lua scripting.
redis-py does not define a wrapper for the SELECT command (normally used to switch the current database) because of the underlying thread-safe connection pool implementation. But nothing prevents you from calling SELECT from a Lua script.
Consider the following example:
$ redis-cli
SELECT 0
SET mykey db0
SELECT 1
SET mykey db1
The following script displays the value of mykey in the 2 databases from the same client connection.
import redis
pool = redis.ConnectionPool(host='localhost', port=6379, db=0)
r = redis.Redis(connection_pool=pool)
lua1 = """
redis.call("select", ARGV[1])
return redis.call("get",KEYS[1])
"""
script1 = r.register_script(lua1)
lua2 = """
redis.call("select", ARGV[1])
local ret = redis.call("get",KEYS[1])
redis.call("select", ARGV[2])
return ret
"""
script2 = r.register_script(lua2)
print r.get("mykey")
print script2(keys=["mykey"], args=[1, 0])
print r.get("mykey"), "ok"
print
print r.get("mykey")
print script1(keys=["mykey"], args=[1])
print r.get("mykey"), "misleading !!!"
Script lua1 is naive: it just selects a given database before returning the value. Its usage is misleading, because after it executes, the current database associated with the connection has changed. Don't do this.
Script lua2 is much better. It takes the target database and the current database as parameters. It makes sure the current database is reactivated before the end of the script, so that the next commands applied on the connection still run against the correct database.
Unfortunately, there is no command to retrieve the current database from within the Lua script, so the client has to provide it systematically. Note that the Lua script must reset the current database at the end whatever happens (even in case of a previous error), which makes complex scripts cumbersome and awkward.
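One way to make the reset robust is to route the inner work through redis.pcall, which returns an error table instead of aborting the script, so the final SELECT always runs. A sketch building on the example above (the GET is just a stand-in for whatever work the script does):
lua3 = """
redis.call("select", ARGV[1])
-- redis.pcall returns errors as a table instead of aborting the script
local ret = redis.pcall("get", KEYS[1])
-- the reset below therefore always runs, even if the inner call failed
redis.call("select", ARGV[2])
if type(ret) == "table" and ret.err then
    return redis.error_reply(ret.err)
end
return ret
"""
script3 = r.register_script(lua3)
print script3(keys=["mykey"], args=[1, 0])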