Started BI Publisher about a week ago.
When working on a new data model, about one or two queries in, I get this error when I try to save:
Failed to load servlet/res?s=%252F~developer1%252Ftest%252FJustin%2520Tests%252FOSRP%2520Information.xdm&desc=&_sTkn=9ba70c01152efbcb413.
I can no longer save my data model.
I tried deleting my queries, logging in and out, and turning my machine off and on, but no luck.
For now I've resorted to saving all of my queries locally in Notepad.
I can create a whole new data model and it will save fine, but then after two or three queries the same thing happens.
What's going on and why would anyone design such a confusing error message?
Any help would be greatly appreciated.
After restarting your server you won't get this issue. It sometimes happens due to a connection problem, so a restart should work. It resolved my problem.
None of the proposed solutions worked for me. I found out on my own that unnecessary brackets around a CASE expression in a SELECT statement will cause this error. Remove the unnecessary brackets and the error goes away.
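To illustrate with a made-up table and columns (not from my actual model), a query shaped like the first one below would fail to save, while the second saved fine:

    -- Fails to save: the CASE expression is wrapped in unnecessary brackets
    SELECT e.employee_id,
           (CASE WHEN e.salary > 10000 THEN 'HIGH' ELSE 'LOW' END) AS salary_band
    FROM   employees e

    -- Saves fine: same logic, brackets removed
    SELECT e.employee_id,
           CASE WHEN e.salary > 10000 THEN 'HIGH' ELSE 'LOW' END AS salary_band
    FROM   employees e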
Per Oracle MetaLink Doc ID 2173333.1: in BI Publisher releases 11.1.1.8.x and up, there is an option to Manage Cache in the Administration section of BIP. This option was also added to 11.1.1.7 in patch 140715 (11.1.1.7.140715).
Clearing the object cache will resolve the saving errors:
Click on the Administration link
Manage BI Publisher
Manage Cache
Click on 'Clear Object Cache'
I have an ETL routine in Pentaho and I'm migrating to Apache Hop.
But I've come across a situation: the Hop step/plugin "Microsoft Excel Input" cannot read the data until I open the Excel file and confirm the Add Confidentiality Label prompt.
This problem does not occur in Pentaho PDI. Does anyone have any tips?
After adding a confidentiality label such as "public", then saving and closing the file, the process works perfectly.
Note: This only happens with some files.
This sounds like a problem that will not have a clear and direct answer and will require some changes in the code.
The code for Apache Hop is managed on Github.
You can create an issue there and one of the developers will help you get this sorted out. When creating a ticket, please be as specific as you can and add a sample; that will improve the chances of getting a fix on short notice.
When trying to add external data in excel, the data connection wizard does not load properly for some reason.
I select Data > From Other Sources > From Data Connection Wizard > my data source > the table I want...
Then I have no options to set parameters; I can only click "Finish" without any query set up.
It just defaults to "SELECT * FROM XXXXX".
Anyone have any ideas as to why this would be the case?
I have done this exact same process before, on multiple occasions, with no issue.
Something has changed to make this process not work properly.
There have been big changes in the Excel data-input environment recently, and all of the processes are being remodeled and reworked. I checked my Excel 16 and it doesn't even list the option you mention (Data Connection Wizard) anymore. I feel a strong push towards the Data Model, which may not suit me or others, but all of the other connection methods are gradually ceasing to work.
My guess is that you don't have the latest Excel version, and so this just happens.
Apologies first of all if there is an answer to this elsewhere on the site. I've checked some of the proposed solutions and can't find anything appropriate.
So I've got this SSRS report that works fine when deployed but won't run locally during testing. The main query itself works when run in the query editor, as do all the subqueries that provide data for the parameter drop-down lists, but when I try to preview the report, I get the error.
Bear in mind it used to work, up until the end of last year, which was when it was last updated.
I've tried removing all the tables and matrices on a copy (replacing them with one very simple table), and the parameters went too, but I still get the error. I've also downloaded the server version, renamed it and redeployed it; it works online, but not locally. As the error message is brutally vague, I've run out of ideas of things to try. Apart from switching over to Power BI, can anyone think of anything else I could do to understand where the error is coming from?
Possibly relevant - the main query has some recursion in a subquery, but only a couple of levels. Could this be related? As I've said before, it used to work...
PS I'm using VS 16.7.2 from server V13.0.4466.4
PPS I also added the query to a brand new report and it errored so I think it must be something related to the SQL itself?
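PPPS For what it's worth, the recursive subquery has roughly this shape (table and column names changed); one thing I could try is capping the depth with MAXRECURSION to rule the recursion out:

    -- Anonymised sketch of the recursive part, not the real query
    WITH org AS (
        SELECT employee_id, manager_id, 0 AS lvl
        FROM   dbo.Employees
        WHERE  manager_id IS NULL
        UNION ALL
        SELECT e.employee_id, e.manager_id, o.lvl + 1
        FROM   dbo.Employees e
        JOIN   org o ON e.manager_id = o.employee_id
    )
    SELECT employee_id, manager_id, lvl
    FROM   org
    OPTION (MAXRECURSION 10)  -- cap the depth while testing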
When I preview rows in the Text file input step of Pentaho, no rows appear, and the 'Show log' option displays this message:
"Dispatching started for transformation".
What does it mean, and how do I overcome this issue?
It seems that either your transformation is invalid (you're missing one essential checkbox or another) or your PDI installation isn't working properly.
Which Java version are you using? And which PDI version? Try it on a fresh install, and if it still doesn't work, go over your Text file input step and validate that it's correctly configured.
Also, try removing all other steps; it could be that one of the subsequent steps is causing problems and stopping PDI from starting the transformation execution.
Well... maybe it's quite late, but I'm currently struggling with this issue in the Pentaho Community Version 8.
What I found, and what solved some of my issues, is that this message can be a warning of a potential deadlock. You have to be sure that none of these situations is present in your transformation:
An external component like a table lock by the database blocks the transformation.
The "Block this step until steps finish" step might run into a deadlock when there are more rows to process than the number of Rows in Rowset.
Within transformations there are situations when streams get split and joined again, so that the transformation blocks by design.
You can see full examples on the Pentaho community wiki page:
https://pentaho-community.atlassian.net/wiki/spaces/EAI/pages/386807182/Transformation+Deadlocks
I hope that it will help you!
I'm trying to load data from my database into an Excel file built from a standard template. The package is ready and it runs, throwing a couple of validation warnings stating that truncation may occur because my template has fields of a slightly smaller size than the DB columns I've matched them to.
However, no data is getting populated into my Excel sheet.
No errors are reported, and when I click Preview on my OLE DB source, it shows me rows of results. None of these are getting populated into my Excel sheet, though.
You should first make sure that you have data coming through the pipeline. Double-click the arrow connecting your source task to your destination task (I'm assuming you don't have any steps in between) to open the Data Flow Path Editor. Click Data Viewer, then Add, and click OK. That will allow you to see what is moving through the pipeline.
Something to consider with Excel is that it prefers Unicode data types to non-Unicode. Chances are you have a database collation that is non-Unicode, so you might have to convert the values in a Data Conversion task.
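If you'd rather avoid a separate Data Conversion task, one option is to do the cast in the OLE DB source query itself. A rough sketch, with made-up table and column names:

    -- Cast non-Unicode (VARCHAR) columns to Unicode (NVARCHAR) at the source
    -- so the Excel destination accepts them without a conversion step
    SELECT CAST(customer_name AS NVARCHAR(255)) AS customer_name,
           CAST(city          AS NVARCHAR(100)) AS city
    FROM   dbo.Customers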
Also, you may need to force the package to execute in the 32-bit runtime. Visual Studio develops in a 32-bit environment, so the drivers you have visibility to are 32-bit. If there is no 64-bit equivalent, the package will break when you try to run it. Right-click your project, click Properties, and under the Debug menu change the Run64BitRuntime setting to FALSE.
You don't provide much information. Add a data viewer between your source and your Excel destination to see whether data is passing through. To do it, just double-click the data flow path, select Data Viewer, and then add a grid.
Run your package. If you see data, provide more details so we can help you.
Couple of questions that may lead to an answer:
Have you checked that data is actually passed through the SSIS package at run time?
Have you double checked your mapping?
Try converting within the package so you don't have the truncation issue
If you add some more details about what you're running, I may be able do give a better answer.
EDIT: Considering what you wrote in your comment, I'd definitely try the third option. Let us know if this doesn't solve the problem.
Just as an assist for anyone else running into this - I had a similar issue and beat my head against the wall for a long time before I found out what was going on. My export WAS writing data to the file, but because I was using a template file as the destination, and that template file had previous data that had been deleted, the process was appending the data BELOW the previously used rows. So, I was writing out three lines of data, for example, but the data did not start until row 344!!!
The solution was to select the entire spreadsheet in my template file, and delete every bit of it so that I had a completely clean sheet to begin with. I then added my header lines to the clean sheet and saved it. Then I ran the data flow task and...ta-daa!!! Perfect export!
Hopefully this will help some poor soul who runs into this same issue in the future!