ServerName for Get-AzSqlDatabaseLongTermRetentionPolicy and Set-AzSqlDatabaseLongTermRetentionPolicy with Azure Managed Instance - azure-sql-managed-instance

I am trying to set LTR on databases in our Managed Instances via the Az cmdlets listed above. The problem I am having is determining what the ServerName parameter should be. If I try to use the ManagedInstanceName or FullyQualifiedManagedInstanceName, I get the following error:
"Can not perform requested operation on nested resource. Parent resource '[instnaceName]/[databaseName]'"
What should I use for this parameter?

The problem was that I was using the wrong cmdlet. The correct cmdlet for Managed Instances is Get-AzSqlInstanceDatabaseLongTermRetentionPolicy.
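For example (a sketch only; the resource group, instance, and database names are placeholders, and the parameter names and the Set- counterpart are my assumptions based on the single-database LTR cmdlets, so verify them with Get-Help before use):

# Sketch: read the current LTR policy of a managed instance database (parameter names assumed)
Get-AzSqlInstanceDatabaseLongTermRetentionPolicy -ResourceGroupName "MyResourceGroup" -InstanceName "my-managed-instance" -DatabaseName "MyDatabase"

# Sketch: set a weekly retention of 4 weeks (ISO 8601 duration; cmdlet name and parameter assumed)
Set-AzSqlInstanceDatabaseLongTermRetentionPolicy -ResourceGroupName "MyResourceGroup" -InstanceName "my-managed-instance" -DatabaseName "MyDatabase" -WeeklyRetention "P4W"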

Related

unable to connect to default instance using its full name

I am developing an app that needs to get a server name or address and a database name from a user and build a folder structure based on that. The problem is that, in order to have the same folder for all the different ways there are to reach the instance (localhost, IP address, etc.), I'm running the following query:
select cast(SERVERPROPERTY('MachineName') as varchar) + '\' + @@SERVICENAME
on the target server, set my folder structure based on that, and save the connection the user gave in that format (it doesn't matter whether I got an IP or a server name; there is one connection string for all).
My problem is that when the instance is the default instance on the target machine, I can't seem to connect using MACHINE_NAME\MSSQLSERVER. I can only log in using the machine name without an instance name. So I need to either find a way to connect to the instance using its full name (preferred) or find a way to figure out whether the targeted instance is the default one.
Any help would be much appreciated.
'MSSQLSERVER' is reserved for the default instance, so you can use:
SELECT CAST(SERVERPROPERTY('MachineName') AS VARCHAR) + CASE WHEN CHARINDEX('MSSQLSERVER', @@SERVICENAME, 1) > 0 THEN '' ELSE '\' + @@SERVICENAME END
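If you only need to detect whether the target is the default instance, SERVERPROPERTY('InstanceName') returns NULL on a default instance, so a check along these lines (a small sketch) also works:

-- 1 = default instance, 0 = named instance (InstanceName is NULL on a default instance)
SELECT CASE WHEN SERVERPROPERTY('InstanceName') IS NULL THEN 1 ELSE 0 END AS IsDefaultInstance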

use local graph in noflo.asCallback

I'm trying to execute a graph located in a local graphs/ folder using the noflo.asCallback function.
I'm getting an error that the graph or component is not available with the base directory specified as baseDir.
What is the correct way to use the asCallback function with a graph?
The issue was related to the package name of the project. The name I was using was noflo-test, and the graph was available as test/GraphName, i.e. without the noflo- prefix.
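In other words, something along these lines (a sketch; the package name noflo-test and the graph file graphs/GraphName come from above, while the port name and input are illustrative):

// Sketch: with a package named "noflo-test" and a graph at graphs/GraphName.json,
// the graph is addressed as "test/GraphName" (the "noflo-" prefix is stripped).
const noflo = require('noflo');
const wrapped = noflo.asCallback('test/GraphName', { baseDir: __dirname });
// "in" is an illustrative port name; pass whatever ports the graph actually exposes.
wrapped({ in: 'some input' }, (err, result) => {
  if (err) { throw err; }
  console.log(result);
});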

BizTalk 2009: SQL to WCF-SQL adapter migration; orchestration not receiving message?

As the title says,
I have a receive location that currently uses the SQL adapter (receive port) to call (poll?) a stored procedure. The stored proc returns a FOR XML result.
The receive location then activates an orchestration which takes the message and populates some variables from its data (expression shape).
Orchestration looks like:
LongScope[ AtomicScope[ Receive location -> Expression ] ][Error handling]
I tried a direct migration to WCF-SQL with XmlPolling as the InboundOperationType, but it throws a null exception during the variable assignment (I assume).
Additional detail:
I caught the message from the receive port by filtering on pipelineName with a send port. There is a slight difference between the messages retrieved by the SQL and WCF-SQL adapters:
SQL adapter:
<rootNode xmlns="namespace"><row data1="data1" data2="data2" /></rootNode>
WCF-SQL adapter:
<rootNode xmlns="namespace"><row data1="data1" data2="data2" xmlns="" /></rootNode>
This should make no difference, if this MSDN post is correct.
I also stepped through the orchestration debugger. The weird thing is that, when using the SQL adapter, the message also shows as null, yet the variables are assigned without problems. I also tried adding a send port directly after the receive port to dump the message; nothing came out.
I would appreciate any info/suggestions/solutions.
Do tell me if I'm missing any info.
Irrelevant info:
As of this post, the receive port doesn't even trigger anymore. I don't know why. Rebooting the PC.
Also, I suspect BizTalk gave me bruxism and led to me needing 6 teeth fillings.
The difference between the XML from the SQL adapter and the WCF-SQL adapter has nothing to do with the MSDN post you are linking to.
In the 2nd XML (WCF-SQL adapter), the row node does not have a namespace. In the 1st XML (SQL adapter), the row node inherits the default namespace "namespace" from its parent 'rootNode'.
Regarding the Receive Port not triggering anymore:
Are you sure your Host Instance(s) are still running?
My solution:
I added "xmlns = 'namespace'" as a 'data' in the stored procedure.
The adapter recognized it and removed it(since it was the same as parent node), allowing me to use the old schema.
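For reference, one standard way to emit such a default namespace from the stored procedure itself is WITH XMLNAMESPACES; the following is only a sketch with illustrative table, column, and namespace names, not the exact query used above:

-- Sketch only: declare the same default namespace that the old SQL adapter
-- produced, so the polled message keeps matching the original schema.
WITH XMLNAMESPACES (DEFAULT 'namespace')
SELECT data1, data2
FROM dbo.SourceTable
FOR XML RAW ('row'), ROOT ('rootNode');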
Filler:
So I generated a schema using the output from the WCF-SQL adapter;
however, I couldn't replace my old one with it, since the expression shape would not recognize its child elements (var = messageObject.childElement).
I then created a map to map the new schema back to the old one.
But that didn't work, because they both shared the same namespace, and BizTalk complained at runtime that it couldn't decide which schema to use.

DBD::Oracle, Cursors and Environment under mod_perl

I need some help, because I can't find a solution to my problems with DBD::Oracle.
So first, here is the current situation:
We are running Apache2 with mod_perl 2.0.4 at our company
The Apache web server was set up with a startup script which sets some environment variables (LD_LIBRARY_PATH, ORACLE_HOME, NLS_LANG)
In httpd.conf there are also environment variables for LD_LIBRARY_PATH and ORACLE_HOME (via SetEnv)
We generally use the Perl module DBI with the DBD::Oracle driver to connect to our main database
Before we create a new DBI instance, we also set some Perl environment variables (%ENV): ORACLE_HOME and NLS_LANG.
So far, this works fine. But now we are extending our system and need to connect to a remote database. Again, we use DBI and DBD::Oracle. But now there are some new conditions:
The new connection must run in parallel with the existing one
The TNSNAMES.ORA for the new connection is placed in a different location (not $ORACLE_HOME.'/network/admin')
The new database contents are provided by stored procedures, which we fetch with DBD::Oracle and cursors (as explained here: https://metacpan.org/pod/DBD::Oracle#Binding-Cursors; a sketch follows after this list)
The stored procedures return object types and collection types containing attributes of Oracle type DATE
To get these dates in a readable format, we set a new environment variable, $ENV{NLS_DATE_FORMAT}
To ensure the date format, we additionally alter the session via alter session set nls_date_format ...
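The cursor fetching follows the pattern from the DBD::Oracle documentation linked above; a minimal sketch (connection string, credentials, and procedure name are illustrative):

use DBI;
use DBD::Oracle qw(:ora_types);

my $dbh = DBI->connect('dbi:Oracle:REMOTE_DB', 'user', 'password', { RaiseError => 1 });

# Bind a PL/SQL ref cursor and read it back as a second statement handle.
my $sth = $dbh->prepare('BEGIN OPEN :cursor FOR SELECT * FROM TABLE(stored_procedure); END;');
my $cursor;
$sth->bind_param_inout(':cursor', \$cursor, 0, { ora_type => ORA_RSET });
$sth->execute;

while (my $row = $cursor->fetchrow_hashref) {
    # process each returned row here
}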
Okay, all of this works fine, too, but only if we make the new connection from the console. The new TNS location is found by the script, the connection can be established, and fetching data from the procedures via cursor also works. All DATE values are formatted as specified.
Now, if we try to make this connection in the Apache environment, it fails. First, the data source name cannot be resolved by DBI/DBD::Oracle. I think this is because our new TNSNAMES.ORA file, or rather its location (published via $ENV{TNS_ADMIN}), is not found by DBI/DBD::Oracle in the Apache context. But I don't know why.
The second problem (if I build a dirty workaround for the first one) is that the date format published via $ENV{NLS_DATE_FORMAT} only works at the first level of our cursor select.
BEGIN OPEN :cursor FOR SELECT * FROM TABLE(stored_procedure); END;
The example above returns collection types of objects which contain DATE attributes. In the Apache context the format published by NLS_DATE_FORMAT is not applied to them. If I use a simple form of the example like this
BEGIN OPEN :cursor FOR SELECT SYSDATE FROM TABLE(stored_procedure); END;
the result (a single date field) is formatted correctly. So I think the nested structures are not formatted because $ENV{NLS_DATE_FORMAT} only takes effect in the console context and not in the Apache context.
So there must be a problem with the Perl environment variables (%ENV) when running under Apache and mod_perl. Maybe a problem with mod_perl?
I am at my wit's end. Maybe someone out there has a solution ... and please excuse my English :-) If you need further explanation, I will try to be more precise.
If your problem is that changes to %ENV made while processing a request don't seem to be honoured, this is because mod_perl assumes you might be running multiple threads and therefore doesn't actually change the process environment when you change %ENV, so external libraries (like the Oracle client) or child processes don't see the change.
You can work around it by first using the prefork MPM, so there aren't any threading issues, and then making changes to the environment using Env::C instead of the %ENV hash.
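In practice that could look like this (a sketch; paths, TNS alias, and credentials are illustrative, and it assumes the prefork MPM and the Env::C module from CPAN):

use DBI;
use Env::C;

# Push the Oracle settings into the real (C-level) process environment so the
# Oracle client libraries see them, then connect as usual.
Env::C::setenv('ORACLE_HOME',     '/opt/oracle/product/11.2', 1);
Env::C::setenv('TNS_ADMIN',       '/etc/oracle/remote', 1);        # directory of the second TNSNAMES.ORA
Env::C::setenv('NLS_DATE_FORMAT', 'YYYY-MM-DD HH24:MI:SS', 1);

my $dbh = DBI->connect('dbi:Oracle:REMOTE_ALIAS', 'user', 'password', { RaiseError => 1 });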

setting Breeze js naming conventions to "camelCase" is resulting in an error when getting metadata

I have a web app built using Breeze JS to communicate with a Breeze API controller on top of Entity Framework.
I want the property names and navigation properties to be in camelCase; on the server they are PascalCase.
Following the instructions here I added this to my code:
breeze.NamingConvention.camelCase.setAsDefault();
As a result I am now getting an error when Breeze tries to get the metadata:
Error: Metadata import failed for Breeze/ZenAPI/Metadata; Unable to process
returned metadata:NamingConvention for this server property name does not roundtrip properly:houseId-->HouseId
Things I know:
All the properties are PascalCased on the server.
There is no non-default formatter set on the server.
When I look at the server response, it is in PascalCase.
If I take out this setting, everything works fine and the naming is PascalCase.
The setting is set, meaning that when I check
breeze.NamingConvention.defaultInstance.name;
I get "camelCase";
What might be the reason for the problem?
NamingConvention.camelCase is intended to convert server property names that are PascalCased into camelCased names on the client. According to the error message, you are trying to do the reverse, i.e. in your case 'houseId' is a server property name.
When metadata is being processed, Breeze attempts to verify that every property name can be roundtripped by passing it through the NamingConvention.clientPropertyNameToServer method and then through the NamingConvention.serverPropertyNameToClient method, or the reverse, depending on whether a client or a server name is provided within the metadata. The message you got indicates that
ServerName ClientName ServerName
---------- ---------- ---------
'houseId' -> 'houseId' -> 'HouseId' ( 'houseId' != 'HouseId');
Note that if 'HouseId' were the server name, then this would work just fine.
ServerName ClientName ServerName
---------- ---------- ---------
'HouseId' -> 'houseId' -> 'HouseId' ( 'HouseId' == 'HouseId');
If it turns out that you really do want 'houseId' as both the server name and the client name, then you will need to write your own NamingConvention (which is actually pretty easy). See http://www.breezejs.com/sites/all/apidocs/classes/NamingConvention.html
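A sketch of such a convention (using the documented NamingConvention constructor; the exception list is illustrative and assumes the listed names are already camelCased on the server):

// Behaves like camelCase, except that names in "asIsNames" are passed through
// unchanged in both directions, so a server property that is already
// camelCased ('houseId') roundtrips correctly.
var asIsNames = { houseId: true };
var convention = new breeze.NamingConvention({
    name: 'camelCaseWithExceptions',
    serverPropertyNameToClient: function (serverName) {
        if (asIsNames[serverName]) { return serverName; }
        return serverName.charAt(0).toLowerCase() + serverName.slice(1);
    },
    clientPropertyNameToServer: function (clientName) {
        if (asIsNames[clientName]) { return clientName; }
        return clientName.charAt(0).toUpperCase() + clientName.slice(1);
    }
});
convention.setAsDefault();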
I found that Code First generation of the model with Entity Framework Power Tools on EF6+ did not allow for the selection of database objects; consequently the table "sysdiagrams" came across all in lowercase instead of the PascalCase notation I normally use for db objects. Once I removed this table from the model and context classes, this error with Breeze went away. All good. I also tested with breeze.NamingConvention.none.setAsDefault() and used PascalCase in my JavaScript, and that worked OK too, but it is not preferred.