Referencing a previous activity without naming it - azure-data-factory-2

In ADFv2, the expression language allows referencing an activity via @activity('ActivityName'), which can be used to get its output or error. Is it possible to get a reference to the previous activity without knowing or hard-coding the name in the expression?

"Is it possible to get a reference to the previous activity without knowing or hard-coding the name in the expression?"
As far as I know, there is no such feature in ADF. One property you may be able to utilize is DependsOn. This property describes the dependencies between activities; if an activity depends on several previous activities with different statuses, the dependencies are constructed as an array.
However, it is not available in expressions; it can only be read and set through the SDK. You could refer to the snippet in this case (Python Azure Data Factory Update Pipeline), especially the ActivityDependency part.
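For illustration, here is a minimal sketch of how DependsOn surfaces as ActivityDependency in the Python SDK (azure-mgmt-datafactory). The activity, pipeline, and factory names below are hypothetical, and the final call is only indicative:

from azure.mgmt.datafactory.models import (
    ActivityDependency,
    PipelineResource,
    WaitActivity,
)

# Hypothetical activities: the second one declares, via its DependsOn
# (depends_on) property, that it runs only after "FirstActivity" succeeds.
first = WaitActivity(name="FirstActivity", wait_time_in_seconds=5)
second = WaitActivity(
    name="SecondActivity",
    wait_time_in_seconds=5,
    depends_on=[
        ActivityDependency(
            activity="FirstActivity",
            dependency_conditions=["Succeeded"],  # could also be Failed, Skipped, Completed
        )
    ],
)

pipeline = PipelineResource(activities=[first, second])
# adf_client.pipelines.create_or_update(resource_group, factory_name, "MyPipeline", pipeline)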

Is the column of type JSON deprecated?

In the BigQuery console, when creating a table, there used to be a JSON option among the column types, but weirdly enough it was never present in their docs. We used this column type in our production tables, and discovered later on that you can't select it in queries, otherwise BigQuery throws an error, and the JSON functions also didn't work with it. So we simply stopped using this column in queries, but it still exists in our tables.
However, in the past couple of days, all queries against this table have been failing with the error 400 Json is not enabled for current project., and this column type is no longer present in the BigQuery console. It seems it was removed or deprecated? I checked the release notes, but the latest release was well before the error occurred. This broke our production environment, and we couldn't even export the data because exporting gave the same error. Instead we had to use a new table without this column, which meant we lost all our history.
Has anyone faced the same problem with any other column types before? Is it normal for a type to be deprecated without users being notified beforehand? This is making me question the reliability of BigQuery.
Please reach out to Google Cloud support and we will help you fix your issue with that problematic table. You may also want to try fixing it yourself using the ALTER TABLE DROP COLUMN statement that is currently in public preview [1]. This will drop the erroneous column (the data in that column only will be lost). The rest of the data will remain usable.
[1] https://cloud.google.com/bigquery/docs/reference/standard-sql/data-definition-language#alter_table_drop_column_statement
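For example, a minimal sketch of running that statement with the google-cloud-bigquery Python client; the project, dataset, table, and column names are hypothetical:

from google.cloud import bigquery

client = bigquery.Client(project="my-project")
# Drop only the problematic JSON column; the rest of the table stays usable.
ddl = "ALTER TABLE `my-project.my_dataset.my_table` DROP COLUMN my_json_column"
client.query(ddl).result()  # wait for the DDL job to finish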
I ran into the same error message a few days ago and was surprised to read about this policy change that isn't backed by a mitigation process. My attempt to follow Vlad Grachev's suggestion to drop the column did not succeed, as the console does not allow querying this table (same "Json is not enabled for current project." error).
My only remediation at this point is (see the rough sketch after this list):
build a new table where the json column is switched to type string
create a pipeline that transforms the objects to strings
migrate the data through the pipeline to the new table
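A rough sketch of what those three steps might look like with the google-cloud-bigquery Python client, assuming the broken table can still be read row by row (e.g. via list_rows) even though SQL queries against it fail; every project, dataset, table, and column name here is hypothetical:

import json
from google.cloud import bigquery

client = bigquery.Client(project="my-project")
src = client.get_table("my-project.my_dataset.broken_table")

# 1. New table: same schema, but the JSON column becomes STRING.
dst_schema = [
    bigquery.SchemaField("payload", "STRING") if field.name == "payload" else field
    for field in src.schema
]
dst = client.create_table(
    bigquery.Table("my-project.my_dataset.fixed_table", schema=dst_schema),
    exists_ok=True,
)

# 2 + 3. Read the rows, stringify the JSON objects, write them to the new table.
batch = []
for row in client.list_rows(src):
    record = dict(row)
    record["payload"] = json.dumps(record["payload"])  # object -> string
    batch.append(record)
errors = client.insert_rows_json(dst, batch)
assert not errors, errors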
In BigQuery, JSON data can be stored in a column of type "RECORD". Are you referring to the same thing by the JSON column type?
BigQuery uses the RECORD (or STRUCT) type to represent nested structure. A column of RECORD type is in fact a large column containing multiple child columns. For more information, refer to the link below:
Json Data in BigQuery
If you are not referring to the RECORD data type, the JSON column type might have been a test feature that does not follow the usual deprecation scheme.
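To illustrate the RECORD type mentioned above, here is a small sketch with the google-cloud-bigquery Python client; the table and field names are hypothetical:

from google.cloud import bigquery

# A RECORD column "address" containing two child columns.
schema = [
    bigquery.SchemaField("name", "STRING"),
    bigquery.SchemaField(
        "address",
        "RECORD",
        mode="NULLABLE",
        fields=[
            bigquery.SchemaField("street", "STRING"),
            bigquery.SchemaField("city", "STRING"),
        ],
    ),
]
table = bigquery.Table("my-project.my_dataset.people", schema=schema)
# client.create_table(table)  # client = bigquery.Client(project="my-project")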

ReferenceManyField, ReferenceArrayField

I'm trying to build an admin app with admin-on-rest connected to Graph.cool. It's working except for the relational references. On Graph.cool we set up a related field to another "type", and the created fields are arrays of objects with a related id prop and the related type.
But admin-on-rest expects a single array of ids. I could change my schema, but that would break my database on Graph.cool.
I tried some source="??" on the component but no luck. Any ideas? Thanks
That should be handled by the client. I'm afraid it's not implemented yet. You're welcome to give it a try if you like. Otherwise, you'll have to wait until I get some time for it

Azure Stream Analytics -> how much control over path prefix do I really have?

I'd like to set the prefix based on some of the data coming from event hub.
My data is something like:
{"id":"1234",...}
I'd like to write a blob prefix that is something like:
foo/{id}/guid....
Ultimately I'd like to have one blob for each id. This will help how it gets consumed downstream by a couple of things.
What I don't see is a way to create prefixes that aren't related to date and time. In theory I can write another job to pull from blobs and break it up after the stream analytics step. However, it feels like SA should allow me to break it up immediately.
Any ideas?
{date}, {time} and {partition} are the only ones supported in the blob output prefix. {partition} is a number.
Using a column value in blob prefix is currently not supported.
If you have a limited number of such {id}s, you could work around this by writing multiple SELECT statements with different filters, each writing to a different output with a hardcoded prefix. Otherwise it is not possible with just ASA.
It should be noted that you actually can do this now. I'm not sure when it was implemented, but you can now use a single property from your message as a custom partition key, and the syntax is exactly what the OP asked for: foo/{id}/something/else
More details are documented here: https://learn.microsoft.com/en-us/azure/stream-analytics/stream-analytics-custom-path-patterns-blob-storage-output
Key points:
Only one custom property allowed
Must be a direct reference to an existing message property (i.e. no concatenations like {prop1+prop2})
If the custom property results in too many partitions (more than 8,000), then an arbitrary number of blobs may be created for the same partition

Referring to a particular parent, in case of multiple, and then retrieving its attribute

If an sobject has more than one parent, how do we lexically refer to any one of its parents in order to get its attributes?
I need it to write a naming convention for that particular sobject.
The sobject and its parent sobjects are connected in the schema by code.
What I tried was something like:
{project.code}/{sobject.parent01_code}/{sobject.code}/{context}
which works, but {sobject.parent01_code} is not what I want, because it is not self-explanatory and is too cryptic to be used for naming files and directories.
I would rather want something like ../{sobject.parent01.name}/.. or ../{sobject.parent01_code.name}/.., which returns a Reported Error: "too many values to unpack" error.
So how can I achieve such a thing? Given, if I am not wrong, the absence of a full-blown expression language in the naming-convention settings, which, if present, would enable something like #SOBJECT(parent01["code", {sobject.parent01_code}]).name.
These are two separate questions, but they are put into one because they relate to one particular problem.
This used to be unsolved in earlier versions of TACTIC. In 4.1, however, you can also use the expression language inside the braces.
ex: {project.code}/{#GET(example/some_stype.name)}/versions
If some_stype is the parent of the current stype, you'll get the corresponding sobject's name.

NHibernate: Return A Constant In HQL

I need to return a constant from an HQL query in NHibernate:
SELECT new NDI.SomeQueryItem(user, account, " + someNumber + ")
FROM NDI.SomeObject object
I am trying for something like the above. I've also tried this:
SELECT new NDI.SomeQueryItem(user, account, :someNumber)
FROM NDI.SomeObject object
And then later:
.SetParameter("someNumber", 1).List<SomeQueryItem>();
But in the first case I get an 'Undefined alias or unknown mapping 1' error, which makes some sense since it probably thinks the 1 is an alias.
For the second I get an 'Undefined alias or unknown mapping :someNumber' error, which again makes some sense if it never set the parameter.
I have to believe there's some way to do this.
Please feel free to continue to believe there is some way to do this - but with HQL there isn't!
Why would you want to anyway? If you want to update this property to the value you specify, then do so after you've loaded the objects. Alternatively, if your result set doesn't quite match your objects, you could always use a SQL query (which you can still do via an NHibernate session). But the purpose of NHibernate is to map what's in your database onto objects, so specifying a manual override like this is quite rightly not allowed.
It sounds like there is a (small?) disconnect between your domain objects and your database model. What about creating a small "DTO" object to bridge this gap?
Have your query return a list of SomeQueryItemDTO (or whatever you want to call it) which, due to the naming, you know is not a true part of your domain. Then have some function to process the list and build a list of true SomeQueryItem objects by incorporating the data that is extraneous to the database.
If you're already using the Repository Pattern, this should be easier since all the ugly details are hidden inside of your repository.