ADF Azure Data Factory debug not running saved changes - azure-data-factory-2

Has anyone seen this behavior? For example, here is my code in an activity:
#{concat(
substring(activity('GetMaxDate').output.firstRow.MAX_DATE,0,4)
This IS saved. Multiple times. But when I run it in debug, this is what actually runs:
#{concat(
 substring(activity('GetMaxDate').output.firstRow.MAX_DATE,1,4)
 ,'
It's running the prior version (1,4) instead of the new version (0,4). I first noticed this because I changed the name of the activity and debug still ran the old name. This seems like a new problem I've not had before. If I publish and run it from a trigger, it picks up the change; it's just debug that isn't picking it up. This seems an inexcusable bug. This is 101 functionality, folks.
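For reference: ADF's substring() takes (value, startIndex, length) and is zero-based, so the two versions pull different slices out of MAX_DATE. A quick Python sketch of the difference, using a hypothetical MAX_DATE value:

    # ADF's substring(value, start, length) == value[start : start + length]
    max_date = "2021-06-30"      # hypothetical MAX_DATE value
    print(max_date[0:0 + 4])     # substring(..., 0, 4) -> "2021"
    print(max_date[1:1 + 4])     # substring(..., 1, 4) -> "021-"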
Any suggestions? Should this be logged with Microsoft as a bug?

Additional option to Gary's comment:
C) Rename your pipeline, save, run debug. Rename back after.
This worked for me.

I've seen this cache behavior in the past: the preview query shows cached data from the source table even though the source table's data has completely changed.
Deleting the pipeline, dataset, etc. and creating a new pipeline solved the issue for me.
This seems to happen when debug has been used too many times. I recommend logging this behavior as a bug.

Related

Run flowtype checker manually

I have IDEA Ultimate 2018.1 with Flowtype (flow-bin) configured and all the checkboxes selected. I followed this guide: https://www.jetbrains.com/help/idea/2017.2/flow-type-checker.html
Type checking takes a long time to run. When I change something in my code (reverting a wrong annotation, or creating a wrong one), I have to wait around 30 seconds for the annotations to update, that is, for IDEA to trigger the Flow server to analyse the files and update the editor accordingly. That is quite a lot.
Can I trigger that type-checking analysis manually inside IDEA to get the editor updated? Or can I change the auto-run interval?
As Kraus noticed, my version of flow-bin was old.
I was using version 0.26.0 instead of the then-current 0.74.0, mainly because when I updated Flow I was not using flow-bin but flow...
Thanks. Now IDEA and Flow are fast.

TFS - reconstructing lost overwritten code?

I was working on a solution and, under the assumption that I had already checked in my changes, I pulled down a new version of the solution, and all the changes disappeared.
One of my colleagues suggested that, since I often ran it in debug mode, there might be a DLL kicking about that I could reverse engineer, but the DLLs all seem to have been overwritten too.
This is about two weeks' worth of work, so any help would be appreciated.
If you didn't shelve or check in the changes before they were overwritten, I'm afraid you won't be able to revert the files.
However, when you edit files locally, the changes are temporarily saved to TFS temporary diff files in "C:\Users\{user}\AppData\Local\Temp\TFSTemp". The files all have names like "vctmp38604_939733.cs", and you can recover your changes from them.
So have a look in that folder; hopefully the diff files are still there.
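If there are a lot of files in there, here is a minimal Python sketch (assuming Python 3 on Windows and the default TFSTemp path above) that lists the diff files newest-first, so the most recent edits surface at the top:

    import glob, os, time

    # TFS temp diff folder mentioned above
    temp = os.path.expandvars(r"%LOCALAPPDATA%\Temp\TFSTemp")

    # list vctmp* files, newest first
    for f in sorted(glob.glob(os.path.join(temp, "vctmp*")),
                    key=os.path.getmtime, reverse=True):
        print(time.ctime(os.path.getmtime(f)), f)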
Just a suggestion: please ALWAYS shelve or check in code in good time, so you don't lose your changes.

BI Publisher - Fail to load and save data model

I started using BI Publisher about a week ago.
When working on a new data model, about one or two queries in, I get this error when I try to save:
Failed to load servlet/res?s=%252F~developer1%252Ftest%252FJustin%2520Tests%252FOSRP%2520Information.xdm&desc=&_sTkn=9ba70c01152efbcb413.
I can no longer save my data model.
I tried deleting my queries, logging in and out, and turning the machine off and on, but no luck.
I'm currently reduced to saving all of my queries locally in Notepad.
I can create a whole new data model and it will save fine, but then after two or three queries the same thing happens.
What's going on and why would anyone design such a confusing error message?
Any help would be greatly appreciated.
After restarting your server, you won't get this issue. It sometimes happens due to a connection problem, so a restart should take care of it. It resolved my problem.
None of the proposed solutions worked for me. I found out, on my own, that unnecessary brackets around a CASE expression in a SELECT statement will cause this error; for example (an illustrative query), SELECT (CASE WHEN x = 1 THEN 'a' ELSE 'b' END) AS col triggers it, while SELECT CASE WHEN x = 1 THEN 'a' ELSE 'b' END AS col does not. Remove the unnecessary brackets and the error goes away.
Oracle MetaLink Doc ID 2173333.1: in BI Publisher releases 11.1.1.8.x and up, there is an option to Manage Cache in the Administration section of BIP. This option was also added to 11.1.1.7 in patch 140715 (11.1.1.7.140715).
Clearing the object cache will resolve the saving errors:
Click on the Administration link
Manage BI Publisher
Manage Cache
Click 'Clear Object Cache'

dispatching started for transformation

When I preview rows in the Text file input step in Pentaho, no rows appear, and the 'Show log' option displays this message:
"Dispatching started for transformation".
What does it mean? How can I overcome this issue?
It seems that either your transformation is invalid (you're missing one essential checkbox or another) or your PDI installation isn't working properly.
Which Java version are you using? And which PDI version? Try it on a fresh install, and if it still doesn't work, go over your Text file input step and validate that it's correctly configured.
Also, try removing all the other steps; it could be that one of the subsequent steps is causing problems and stopping PDI from starting the transformation execution.
Well... maybe it's quite late, but I'm currently struggling with this issue in Pentaho Community Edition 8.
What I found, which solved some of my issues, is that this message can be a warning sign of a deadlocked process. You have to be sure that none of these situations are present in your transformation:
An external component like a table lock by the database blocks the transformation.
The "Block this step until steps finish" step might run into a deadlock when there are more rows to process than the number of Rows in Rowset.
Within transformations, there are situations where streams get split and joined again, so that the transformation blocks by design.
You can see full examples on the Pentaho Jira documentation page:
https://pentaho-community.atlassian.net/wiki/spaces/EAI/pages/386807182/Transformation+Deadlocks
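To see why the second situation hangs, here is a toy sketch in Python (an analogy only, not PDI code): a bounded buffer between two steps fills up, the producer blocks, and the consumer is itself waiting for the producer to finish, so neither can proceed:

    import queue, threading

    ROWSET_SIZE = 10                        # analogous to "Nr of rows in rowset"
    hop = queue.Queue(maxsize=ROWSET_SIZE)  # the bounded buffer between steps
    producer_done = threading.Event()

    def producer():
        for row in range(100):              # more rows than the buffer can hold
            hop.put(row)                    # blocks once the buffer is full
        producer_done.set()                 # never reached

    def blocking_step():                    # like "Block this step until steps finish"
        producer_done.wait()                # waits for the producer to finish...
        while not hop.empty():              # ...before draining any rows
            hop.get()

    p = threading.Thread(target=producer, daemon=True)
    b = threading.Thread(target=blocking_step, daemon=True)
    p.start(); b.start()
    p.join(timeout=2)
    print("deadlocked" if p.is_alive() else "finished")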
I hope it helps you!

Toad: Table Autocomplete Functionality Not Working

I've been using Toad for more than a year now without problems. All of a sudden, the table autocomplete feature has stopped working. No settings have been changed, and I've done a clean install of a new Toad version, yet the problem persists.
The image below shows autocomplete defaulting to the view IN_INSTRUMENT in schema MCDM. Normal behaviour should result in a table/view list.
Notably, the above does not happen with all schemas; for some schemas I still see a table list. In the beginning this error happened with only a single schema, but it is slowly progressing to other schemas as well, which is exceptionally frustrating when you're dealing with dozens of schemas that contain hundreds of tables each. It slows down development when you must open the Schema Browser and look for the exact table/view/procedure/package every time instead of letting autocomplete give suggestions.
This same issue has been described in this thread and this thread with less detail, yet no accepted answer has been given.
As can be seen from the Code Assist settings, these should be in order.
How do I reset the autocomplete behaviour to its original state?
Under View > Toad Options > Editor > Code Assist > Toad Insight Objects, checking Synonyms (in addition to Public Synonyms) worked for me.
I've found the solution to this problem. The issue was a corrupt configuration file. For anyone else with the same problem, this is how I fixed mine:
Back up your appdata folder - you can find its location in options -> general -> application data.
Create a new set of user files using Utilities -> Copy User Settings -> Create clean set of user files. Make sure you are running Toad with administrator rights.
Note that the above will delete all your saved connection details (schema names, passwords, connection strings), so take note of these first.
Hope this helps someone in the future.
Try checking "Public synonyms" in the "Toad Insight Objects" list.
Go to Toad Options, then Editor > Code Assist, and uncheck "Cache Code Insight results". This made it immediately start working for me on Toad for Oracle version 9.7. I could then go back and re-check the box, and it would still work.
I faced the same problem even though I had set up everything mentioned above.
So basically, Toad does not suggest column names if I don't include the schema name:
SCHEMA.TABLE_NAME. (autocomplete then lists all the columns)
That worked for me.