Can I change the data source for my output jobs in Upsolver SQLake?

Upsolver output is delayed/stuck - We changed the data source for some outputs to a new one. Oddly, the "delay" column on the outputs page still seems to point to the old data sources, and the delay is not changing. Why is this happening? Shouldn't the output now reflect the new data source and proceed to ingest data from it?

Did you by any chance stop the "old" data source before the old output version had finished? In general, when you edit an output and change its source, you need to let the old data source keep running until you see that the previous version has completed. This is because there are still tasks that need to finish running up to the time the data source was stopped.
So please keep the old data sources running until the Version History tab shows that the previous version has completed. The delay will then move from the previous version to the current one, and you will see the correct delays.
For example, if you go to the output's Version History tab, it should show two versions. Please keep it running until version 1 has completed.
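If you are on SQLake, where outputs are SQL jobs rather than UI objects, the equivalent change would roughly be to create a new transformation job reading from the new staging table and only drop the old job once its final version has completed. A minimal sketch, with all catalog, table, and job names hypothetical:

    -- Hypothetical SQLake sketch: point a new job at the new source table,
    -- keeping the old job running until its last version completes.
    CREATE SYNC JOB transform_orders_from_new_source
        START_FROM = BEGINNING
        ADD_MISSING_COLUMNS = TRUE
        RUN_INTERVAL = 1 MINUTE
    AS INSERT INTO default_glue_catalog.prod.orders_output
       SELECT * FROM default_glue_catalog.staging.orders_new_raw;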

Related

ADF (Azure Data Factory) debug not running saved changes

Has anyone seen this behavior? For example, here is my code in an activity:
#{concat(
substring(activity('GetMaxDate').output.firstRow.MAX_DATE,0,4)
This IS saved. Multiple times. But when I run it in debug, this is what actually runs:
#{concat(\n substring(activity('GetMaxDate').output.firstRow.MAX_DATE,1,4)\n ,'
It's running the prior version (0,4) instead of the new version (1,4). I first noticed this because I changed the name of an activity and debug still ran the old name. This seems like a new problem I haven't had before. If I publish the pipeline and run it from a trigger, it picks up the change; it's just debug that isn't picking it up. This seems like an inexcusable bug. This is 101 functionality, folks.
Any suggestions? Should this be logged with Microsoft as a bug?
An additional option to Gary's comment:
C) Rename your pipeline, save, run debug. Rename back after.
This worked for me.
I've seen this cache behavior in the past: the preview query shows cached data from the source table even though the source table's data was completely changed.
Deleting the pipeline, dataset, etc. and creating a new pipeline solved the issue for me.
This seems to happen when debug has been used too many times. I recommend logging this behavior as a bug.
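For reference, ADF's substring(value, startIndex, length) is zero-based, so the two versions produce visibly different output, which is a quick way to confirm which copy debug actually executed. With a hypothetical MAX_DATE of '2023-06-30':

    substring('2023-06-30', 0, 4)   -> '2023'
    substring('2023-06-30', 1, 4)   -> '023-'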

Changes don't take effect at run time (code and design)

I'm developing a system in VB 2022 and it's almost done, but today the changes I make don't take effect at run time.
For example, I added a button, but when I run the program the button doesn't show.
Same with the code: I can write code, but it doesn't take effect at run time.
I'm also trying to change the startup form to Form2 (initially Form1), but that doesn't take effect either.
What should I do? I didn't change anything; I just opened the file and wrote code. But suddenly this happened.
Most likely your build is failing and VS is automatically running the old output. When that happens, VS will by default prompt you to ask whether to run the old output, but many people tell VS not to prompt them again without actually reading the dialog. Use the Build menu to build your project/solution and pay attention to the Output and Error List windows to see whether it failed.
It may be that the compilation is succeeding but VS simply can't overwrite the output files because they are locked, which does happen sometimes. In that case, just delete the entire obj and bin folders from your project folder. You may need to close VS to do so. The next time you build, new output will be created and run.
If this happens regularly then you should probably repair VS and, if it continues after that, reinstall.
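If you want to take VS out of the equation, a rough sketch from a Developer Command Prompt (solution and folder names hypothetical):

    rem Rebuild and read any errors directly from the MSBuild output
    msbuild MySolution.sln /t:Rebuild

    rem With VS closed, clear stale or locked outputs from the project folder
    rmdir /s /q bin
    rmdir /s /q obj

The run-old-output prompt itself can be restored under Tools > Options > Projects and Solutions > Build and Run, by setting "On Run, when build or deployment errors occur" back to "Prompt to launch".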

NiFi GetSolr processor not working

When I run the GetSolr processor in NiFi for the first time, it extracts the data from Solr.
But if I run it a second time, it doesn't fetch the data from Solr.
Could anybody please help me with this?
It is meant to do incremental extraction, so after the first run it only fetches data that is newer than the data previously fetched.
If you right-click on the processor and choose View State, there is an option to clear the state, which should set it back to the beginning.
If this does not answer your question, please show how you have configured GetSolr and which version of NiFi you are using.
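If you prefer to script it, component state can also be cleared through NiFi's REST API. A minimal sketch, assuming a stopped processor, with the host and processor id hypothetical:

    # Clear the GetSolr processor's stored state via the NiFi REST API
    curl -X POST http://localhost:8080/nifi-api/processors/abc123-processor-id/state/clear-requests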

AccuRev: Restore workspace to older version

I'm brand new to AccuRev and I'm having a lot of trouble with it. One of the developers I'm working with has been promoting bad code (things are now broken that weren't before) to a stream for two months, and I want to get a copy of the original code from before any changes were made.
I currently have a workspace, and whenever the other developer writes code, I pull his changes into this workspace and attempt to fix the bugs. These changes are promoted to an existing issue within AccuRev.
Is there any way I can create a second workspace and obtain a copy of the original code (before any changes were made)? My target date is March 14th.
I would suggest you revert or demote the bad code that was promoted into the stream (depending on which version of AccuRev you are using). This will put the stream back into the state it was in before the promotion occurred.
Below are some suggested readings on the related topics.
Best way to "un-promote" files in Accurev?
https://community.microfocus.com/borland/managetrack/accurev/w/wiki/26745/purge-revert-and-demote
https://community.microfocus.com/borland/managetrack/accurev/w/accurev_knowledge_base/25951/how-to-revert-changes-in-a-stream
https://community.microfocus.com/borland/managetrack/accurev/w/accurev_knowledge_base/26079/what-is-the-proper-way-to-revert-by-change-package
As an alternative, you could create a time-based stream below the one with the bad code. Set a time basis that predates the bad promote.
To do this, right-click the stream >> New Snapshot.
Select "Specified" and enter the date (with a relative time).
From the snapshot, create a new workspace, which will be populated with the previous code.
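The same steps can be done from the AccuRev CLI. A rough sketch, with the stream, snapshot, and workspace names (and the year) all hypothetical:

    # Snapshot the stream as of the target date
    accurev mksnap -s myStream_pre_bad_code -b myStream -t "2016/03/14 00:00:00"
    # Create a new workspace backed by that snapshot
    accurev mkws -w myStream_restore_ws -b myStream_pre_bad_code -l /path/to/workspace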
Hope this helps!

SSIS Package Not Populating Any Results

I'm trying to load data from my database into an Excel file based on a standard template. The package is ready and it runs, throwing a couple of validation warnings stating that truncation may occur because my template has fields slightly smaller than the DB columns I've matched them to.
However, no data is getting populated into my Excel sheet.
No errors are reported, and when I click Preview on my OLE DB source, it shows rows of results. None of these end up in the Excel sheet, though.
You should first make sure that you have data coming through the pipeline. Double-click the arrow connecting your Source task to your Destination task (I'm assuming you don't have any steps in between) to open the Data Flow Path Editor. Click Data Viewer, then Add, and click OK. That will let you see what is moving through the pipeline.
Something to consider with Excel is that it prefers Unicode data types to non-Unicode. Chances are your database collation is non-Unicode, so you might have to convert the values in a Data Conversion task.
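For example, in a Derived Column task the cast would look roughly like this (the column name and length are hypothetical); the Data Conversion task does the same thing through its mapping grid:

    (DT_WSTR, 50)MyStringColumn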
ALSO, you may need to force the package to execute in 32-bit runtime. The VS application develops in a 32-bit environment, so the drivers you have visibility into are 32-bit. If there is no 64-bit equivalent, the package will break when you try to run it. Right-click on your project, click Properties, and under the Debug menu change the Run64BitRuntime setting to FALSE.
You don't provide much information. Add a Data Viewer between your source and your Excel destination to see whether data is passing through. To do it, just double-click the data flow path, select Data Viewer, and then add a grid.
Run your package. If you see data, provide more details so we can help you.
Couple of questions that may lead to an answer:
Have you checked that data is actually passing through the SSIS package at run time?
Have you double-checked your mapping?
Try converting within the package so you don't have the truncation issue.
If you add some more details about what you're running, I may be able to give a better answer.
EDIT: Considering what you wrote in your comment, I'd definitely try the third option. Let us know if this doesn't solve the problem.
Just as an assist for anyone else running into this - I had a similar issue and beat my head against the wall for a long time before I found out what was going on. My export WAS writing data to the file, but because I was using a template file as the destination, and that template file had previously contained data that had since been deleted, the process was appending the data BELOW the previously used rows. So I was writing out three lines of data, for example, but the data did not start until row 344!
The solution was to select the entire spreadsheet in my template file and delete every bit of it so that I had a completely clean sheet to begin with. I then added my header lines to the clean sheet and saved it. Then I ran the data flow task and... ta-da! A perfect export!
Hopefully this will help some poor soul who runs into this same issue in the future!