Text was truncated or one or more characters had no match in the target code page.
How can I practically avoid this error when importing data from Excel?
In general, this error is caused by source data that is too long for the destination field it is going into, by data formats that differ between source and destination, or something else along those lines. However, since you mention the data is coming from Excel, you might also want to try the following Excel-specific solution:
Set your package to run in 32-bit mode. Click Project on the top menu and select Data Imports Properties, then go to Configuration Properties -> Debugging and set Run64BitRuntime to false.
This sometimes works for Excel projects, but it is a long shot. If it doesn't work, try looking at the data formats and field sizes in the source (and what they are being imported as).
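If you end up checking the source data, one practical trick is to read the sheet back and report the longest value in each column, then compare that against the destination field sizes. This is only a rough sketch, assuming the ACE OLE DB provider is installed; the file path and the Sheet1 name are placeholders for your own file:
using System;
using System.Data;
using System.Data.OleDb;

class ExcelLengthCheck
{
    static void Main()
    {
        // Placeholder path and sheet name; adjust to your workbook.
        var conn = new OleDbConnection(
            @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Data\import.xlsx;" +
            "Extended Properties='Excel 12.0 Xml;HDR=YES;IMEX=1'");
        conn.Open();

        var table = new DataTable();
        new OleDbDataAdapter("SELECT * FROM [Sheet1$]", conn).Fill(table);

        // Report the longest value in each column so it can be compared with the
        // destination field sizes flagged in the truncation warning.
        foreach (DataColumn col in table.Columns)
        {
            int max = 0;
            foreach (DataRow row in table.Rows)
                max = Math.Max(max, row[col].ToString().Length);
            Console.WriteLine($"{col.ColumnName}: longest value is {max} characters");
        }

        conn.Close();
    }
}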
When trying to add external data in Excel, the Data Connection Wizard does not load properly for some reason.
I select Data > From Other Sources > From Data Connection Wizard > My Data Source > the table I want...
Then I have no options to set parameters; I can only click "Finish" without any query set up.
It just defaults to "SELECT * FROM XXXXX".
Anyone have any ideas as to why this would be the case?
I have done this exact same process before, on multiple occasions, with no issue.
Something has changed to make this process not work properly.
There have been big changes in the Excel data import environment recently, and the whole process is being remodeled and reworked. I checked my Excel 2016 and it doesn't even list the option you mention (Data Connection Wizard) anymore. I sense a strong push towards the Data Model, which may not suit me or others, but the other connection methods are gradually ceasing to work.
My guess is that you don't have the latest Excel version, and that is why this is happening to you.
I have tried different things, and as far as I know there are two ways of moving test cases from one Jira project to another:
1. Manually move each test case using the "Move" option.
2. Export all the test cases in CSV format and then import them into the new project.
The problem with:
The 1st approach is that it is time-consuming, as there are thousands of test cases.
The 2nd approach is that I don't see an option to export test cases in CSV format; I only see XML, Excel, and printable. And the "Test Importer" only accepts CSV when importing test cases.
Is there a better way to move/copy test cases from one JIRA project to another?
You can do a bulk edit and move all the issues together. Here are the general steps:
Search using a query that returns all the issues you want to move (in your case it would look something like project = MyProject AND type = Test; see the sketch after these steps for a quick way to preview what it matches)
Click on the Tools menu in the top right corner and select Bulk Edit
Select all the issues you want to edit and click Next
Select Move Issues from the listed options and click Next
Fill in the required information and Jira should start moving all the selected issues
Since there is no undo option, I would recommend moving one or two issues first to see that it works as you expect before making any bulk changes.
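If you want to sanity-check the query before committing to the bulk change, you can ask Jira's standard /rest/api/2/search endpoint how many issues the JQL matches. A minimal sketch, with a placeholder base URL and credentials:
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class JqlPreview
{
    static async Task Main()
    {
        // Placeholder Jira base URL and credentials.
        using var client = new HttpClient { BaseAddress = new Uri("https://jira.example.com") };
        var token = Convert.ToBase64String(Encoding.ASCII.GetBytes("user:password-or-api-token"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

        // maxResults=0 keeps the payload small; the returned JSON contains a "total"
        // field showing how many issues the JQL matches.
        var jql = Uri.EscapeDataString("project = MyProject AND type = Test");
        var json = await client.GetStringAsync($"/rest/api/2/search?jql={jql}&maxResults=0");
        Console.WriteLine(json);
    }
}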
I am trying to use Java / C# or any other programming language to modify a .pbix file generated by Microsoft Power BI. Is there a DLL provided by Power BI, or how can I read the content through a program? I just want to get and update the data source directory. Please help.
Thanks.
I don't think it's possible, and even if it is, the solution is likely inelegant.
Even if you managed to do this, you would need to open your PBIX file in the PBI Desktop to refresh your data.
Are you doing this because you have many queries and it's inconvenient to change the data source string (folder name) for all of them? There is a way to keep your connection string in a single variable, as described here.
I don't know your exact setup, but looking at your question, let's say you have sets of files in different folders and you want to change the folder in one step.
To use the approach from the link above but with file input, you need to do the following:
If it's a new report, import your files as usual
Create a new query: "New Source" -> "Blank Query".
You will see "Query1" and an empty text box; enter the folder name, for example "C:\". Rename this query to "Folder".
Go to your imported file in the query editor, "test1" in my example. In query settings on the right, select source.
Change the file name by substituting the folder with your "Folder" query. For example, change:
...File.Contents("C:\test1.csv"),...
to:
...File.Contents(Folder & "test1.csv"),...
Repeat for all imported files, then "Close & Apply".
Now whenever you need to change the folder with your files, edit your "Folder" value and "Refresh".
I have a data set with multiple tables. In one of these tables I have included some scalar queries that take various fields of the table and spit out a single result (for instance average of fields X, Y, and Z), etc. Up to now, I have had great success with this, but now I am getting a very odd issue cropping up.
When I try to add a new scalar query, I enter my SQL and name my query, just like I normally do. However, whenever I do this now, it creates a duplicate of the DataSet.Designer file (now DataSet1.Designer), and I start to get compiler errors since all the functions within the partial classes are duplicated. I am only able to back out of this by deleting the new designer file, in which case my new SQL query becomes unavailable (but I still see it in the original designer view).
I am not sure why this is happening. Can anyone shed any light on why the IDE is creating a new DataSet.Designer file instead of modifying the original?
Discovered the answer. It looks like this may happen if some process is using the original designer file when the IDE tries to generate a new one. Unfortunately, it doesn't reconcile that the old one is still there. The following will correct the issue:
Delete the newest (offending) designer file from your project
Close the project.
Open the vbproj file using a text editor.
Search for the following:
<LastGenOutput>myDataSet1.Designer.cs</LastGenOutput>
Take the 1 off of the dataset name, so it reads myDataSet.Designer.cs.
Save the file and reopen your project.
I'm trying to load data from my database into an Excel file based on a standard template. The package is ready and it runs, throwing a couple of validation warnings stating that truncation may occur because my template has fields of a slightly smaller size than the DB columns I've matched them to.
However, no data is getting populated to my excel sheet.
No errors are reported, and when I click preview for my OLE DB source, it's showing me rows of results. None of these are getting populated into my excel sheet though.
You should first make sure that you have data coming through the pipeline. Double-click the arrow connecting your Source task to your Destination task (I'm assuming you don't have any steps in between) and you'll open the Data Flow Path Editor. Click on Data Viewer, then Add, and click OK. That will allow you to see what is moving through the pipeline.
Something to consider with Excel is that it prefers Unicode data types to non-Unicode ones. Chances are you have a database collation that is non-Unicode, so you might have to convert the values in a Data Conversion task.
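If the Data Conversion task gets unwieldy, a synchronous Script Component can do the same thing. This is only a sketch: the code lives inside the component's editor, where SSIS generates the UserComponent base class and the Input0Buffer type for you, and the Name / NameUnicode columns are hypothetical names you would define on the component yourself.
public class ScriptMain : UserComponent
{
    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        // Assigning the .NET string to a DT_WSTR output column gives you the
        // code-page-to-Unicode conversion that the Excel destination expects.
        Row.NameUnicode = Row.Name;
    }
}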
Also, you may need to force the package to execute in the 32-bit runtime. Visual Studio is a 32-bit environment, so the drivers you have visibility to are 32-bit. If there is no 64-bit equivalent, the package will break when you try to run it. Right-click on your project, click Properties, and under the Debugging section change the Run64BitRuntime setting to False.
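For completeness, the same rule applies outside the designer: when you execute a package from your own code, the bitness of the host process decides which providers are loaded, so an x86 build forces the 32-bit Excel driver. A minimal sketch, assuming a reference to Microsoft.SqlServer.ManagedDTS and with a placeholder package path:
using System;
using Microsoft.SqlServer.Dts.Runtime;

class RunPackage32
{
    static void Main()
    {
        // Build this host project as x86 so the package picks up 32-bit providers.
        var app = new Application();
        Package package = app.LoadPackage(@"C:\Packages\LoadToExcel.dtsx", null);
        DTSExecResult result = package.Execute();
        Console.WriteLine(result); // Success or Failure
    }
}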
You don't provide much information. Add a Data Viewer between your source and your Excel destination to see if data is passing through. To do it, just double-click the data flow path, select Data Viewer, and then add a grid.
Run your package. If you see data, provide more details so we can help you.
Couple of questions that may lead to an answer:
Have you checked that data is actually passed through the SSIS package at run time?
Have you double checked your mapping?
Try converting within the package so you don't have the truncation issue
If you add some more details about what you're running, I may be able to give a better answer.
EDIT: Considering what you wrote in your comment, I'd definitely try the third option. Let us know if this doesn't solve the problem.
Just as an assist for anyone else running into this - I had a similar issue and beat my head against the wall for a long time before I found out what was going on. My export WAS writing data to the file, but because I was using a template file as the destination, and that template file had previous data that had been deleted, the process was appending the data BELOW the previously used rows. So, I was writing out three lines of data, for example, but the data did not start until row 344!!!
The solution was to select the entire spreadsheet in my template file, and delete every bit of it so that I had a completely clean sheet to begin with. I then added my header lines to the clean sheet and saved it. Then I ran the data flow task and...ta-daa!!! Perfect export!
Hopefully this will help some poor soul who runs into this same issue in the future!
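If you have to reuse the template often, the cleanup can also be scripted rather than done by hand each time. A rough sketch, assuming the EPPlus NuGet package and a placeholder path, sheet name, and single header row:
using System.IO;
using OfficeOpenXml;

class CleanTemplate
{
    static void Main()
    {
        // EPPlus 5+ requires a license context to be declared.
        ExcelPackage.LicenseContext = LicenseContext.NonCommercial;

        using var package = new ExcelPackage(new FileInfo(@"C:\Templates\Export.xlsx"));
        ExcelWorksheet ws = package.Workbook.Worksheets["Sheet1"];

        // ws.Dimension still covers rows whose contents were deleted but that Excel
        // remembers as "used"; removing them makes the next export start right under
        // the header instead of hundreds of rows down.
        int lastRow = ws.Dimension?.End.Row ?? 1;
        if (lastRow > 1)
            ws.DeleteRow(2, lastRow - 1);

        package.Save();
    }
}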