I am currently working on an automation to copy a row from Smartsheet to an Excel table, then extract that table data to a SharePoint list (thanks Microsoft :( ). However, I'm having an issue filtering my automation to the new row that was added, or, when I do get it to work, the automation gives an error. The flow below (without using the output from the Filter array step) works but copies all the rows in the sheet and creates a table, which is not feasible.
Final Step:
If I add the filtered array then I get the following error: "A value must be provided for item. clientRequestId xxxxxxxx-xxxxxx-xxxx"
How can I have only the current/new row added to my table?
I finally figured it out. I shared the updated flows below.
At the company where I currently work we have a big issue with MS Excel stability, hence my question below.
I recently learned about Dictionary code in Excel VBA.
I know pivot tables and how they work.
Sadly the issue is not with pivots themselves but with Excel. (The IT dept. has been working on the issue for 3 weeks now, and we don't know when / if they are going to fix it.)
Hence my big ask for this community:
I need a userform working with a dictionary.
What I need is code that works like a pivot table but uses dictionaries (since in theory they are faster and sit outside of VBA's / Excel's basic built-in options).
So, can someone help in creating such code? Is this the right option?
I would like to see a userform where I can pick, from my Table's (Ctrl+T) headers, the column I want to group by, and of course I would also have to be able to choose the column whose values the dictionary sums up.
Thank you both for answering.
Let's start then.
I watched the ExcelMacroMastery videos regarding dictionaries;
in one example he used them to do a basic sum, exactly like the basic functionality of a pivot table.
Since that is the basic use case where I work, I wish to have a dictionary macro in which I can choose the column that provides the unique values and a second column whose values are summed per unique value.
The issue is: if I show a specific file or example, the macro could end up working only for that specific case. I would like it to pick the unique-value column from the Table's (Ctrl+T) headers and to offer some way (like a dropdown menu) to choose the column over which the sum happens.
This instability is due to the 32-bit Office 365 suite running on 64-bit PCs/laptops, and a recent company update made it even worse; now there is an issue with even the basic save-file option.
Not to mention Excel crashing for no apparent reason.
So, to sum up,
I need dictionaries to step up and replace the basic summarizing functionality of a pivot table (a rough sketch of what I mean is below),
or to replace this non-pivot way:
use the UNIQUE function to determine the unique values from a specified column (a non-Table object, sadly);
use SUMIF or SUMIFS to total the specified amount/value for that unique list.
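Something along these lines is what I have in mind, just as a rough sketch based on the ExcelMacroMastery approach (the table name "Sales", the sheets "Data" / "Report" and the headers "Region" / "Amount" are placeholders, not my real data):

' Sketch only: sum one table column per unique value of another, like a basic pivot.
' "Sales", "Data", "Report", "Region" and "Amount" are placeholder names.
Sub DictionaryPivotSketch()
    Dim tbl As ListObject
    Dim dict As Object
    Dim keyCol As Range, sumCol As Range
    Dim i As Long, k As Variant

    Set tbl = ThisWorkbook.Worksheets("Data").ListObjects("Sales")
    Set dict = CreateObject("Scripting.Dictionary")   ' late binding, no library reference needed

    ' These two columns would come from the userform dropdowns; hard-coded for the sketch.
    Set keyCol = tbl.ListColumns("Region").DataBodyRange
    Set sumCol = tbl.ListColumns("Amount").DataBodyRange

    ' Accumulate the sum per unique key (a missing key starts as Empty, i.e. 0).
    For i = 1 To keyCol.Rows.Count
        dict(keyCol.Cells(i, 1).Value) = dict(keyCol.Cells(i, 1).Value) + sumCol.Cells(i, 1).Value
    Next i

    ' Write one row per unique key to the report sheet.
    With ThisWorkbook.Worksheets("Report")
        .Cells.ClearContents
        i = 1
        For Each k In dict.Keys
            .Cells(i, 1).Value = k
            .Cells(i, 2).Value = dict(k)
            i = i + 1
        Next k
    End With
End Sub

The userform would simply feed the two chosen header names (taken from tbl.HeaderRowRange) into those two ListColumns calls instead of hard-coding them.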
//EDIT:
I more or less found what I was looking for, hence the edit.
I'm sharing the link to the file I wish to change a bit:
https://app.monstercampaigns.com/c/s0iavndiopijkrar8ghp/
To this file I wish to add a userform through which the report's headers are chosen from the source data, and through which the column to sum by is picked.
I am using VS2019 for SSAS Tabular Model development. I have imported a table from a CSV. The source CSV has undergone a change (a new column has been added). When I process my table in VS2019, it gets processed successfully. However, I am unable to see the new column introduced in the source CSV. I went to Table Properties and did a Refresh Preview but was not able to see the new column. I closed and restarted the solution and re-processed the table, but no luck! I remember that in VS2017 we used to add the column by going into Table Properties and selecting the new column, but things seem to be different in VS2019. Any help would be appreciated.
I'm assuming you used Get Data / Power Query to import the CSV. This unfortunately generates a Power Query Csv.Document function call that hard-codes the number of columns the file had when the query was generated. This parameter isn't exposed through the usual Power Query UI.
If you use the Advanced Editor or turn on the Formula Bar (View menu), you will see that a parameter like Columns=10 was generated, usually in your Source step.
It currently seems safe to delete that parameter by editing the code - it will then always pull back all columns presented. Or if you prefer, you can edit the number of columns, as described in this blog post:
https://prathy.com/2016/08/how-to-add-extra-columns-to-an-existing-power-bi-file-which-using-csv-data-source/
I have just started learning Pentaho Spoon steps and am stuck on one problem. I need to transform the data from an xls file and load it into a database. The problem is that my input file looks like this: table-description
And I cannot work out how to solve two problems:
For my next step I need to keep not only the table itself (range A8:D11) but also the date (cell A5). When I try to do this in Pentaho with the Microsoft Excel Input step, it only works when I select cell A8 as the start row, but then the date is not kept.
In the Microsoft Excel Input step I must always select a start row in order to generate a table and use it in the next steps, and I must do it manually, i.e. specify that my table starts at cell A8. In my case I cannot always say for sure that the table starts at A8. I only know that the start cell is the cell in column A whose value is "Date". The Microsoft Excel Input step will be the first step in my kettle because I must get the data before I can change it; that is why I think I cannot use a JavaScript step before it.
I have not found a solution to these two problems and I do not know whether it is possible. I will be grateful for any help.
I am not sure what you mean by converting an Excel file to a database, but if you can convert the xls into a CSV and read that file, then you know from which row you need to filter the data. Basically you can use a simple filter step to keep the data once it matches the column name. I hope this will help.
Use two Microsoft Excel Input steps. One step reads the table (A8:D11). The other step reads the date (A5). Then merge the two streams, for example using a Join Rows (cartesian product) step.
Read everything. Then use a JavaScript step with two script tabs. For one of the tabs: right-click and choose Set Start Script. Code: var start = 0; The other tab should be kept as a transformation script, with something like: if (FieldA == "Date") { start = 1; } Now you will have an additional field in the stream called start. If start equals 0, then you know that your tabular data hasn't started yet, and you can filter out the row.
I have multiple web query tables in an Excel spreadsheet and I can refer to them in VBA using QueryTables(1), QueryTables(2), etc.
I have some VBA code that is used to refresh one of these tables, please see below. However, the problem is that every time I create a new web query table, the table's QueryTables index changes (i.e., n keeps changing in QueryTables(n) for a selected table). Is there a better way to refer to a specific web query table other than referring to it as QueryTables(1)? Thanks.
main_workbook.Worksheets("Input ID").QueryTables(1).Refresh (False)
Maybe...
' Refresh every query table on the sheet, whatever its current index happens to be
For i = 1 To main_workbook.Worksheets("Input ID").QueryTables.Count
    main_workbook.Worksheets("Input ID").QueryTables(i).Refresh False
Next i
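Or, if the index keeps shifting, refer to the query table by its name rather than its position. Rough sketch only; "MyWebQuery" is a placeholder, so check (or set) the real name when the query is created:

' Refresh a specific web query table by name instead of by index.
' "MyWebQuery" is a placeholder name.
Dim qt As QueryTable
For Each qt In main_workbook.Worksheets("Input ID").QueryTables
    If qt.Name = "MyWebQuery" Then
        qt.Refresh False
    End If
Next qt

If you assign qt.Name yourself right after adding the query, the name stays stable no matter how many other query tables are added later (note that Excel sometimes appends a suffix like _1 to names it generates itself).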
I am attempting to upload an Excel document into Access. I have used VBA to unhide all columns and rows and then delete columns and rows that are not being used. All of the worksheets upload into Access properly except one. This particular worksheet attempts to upload a field and label it Field 12. I am unable to find a way to delete this field. Any help?
It is probably the first column after your data...
Try either in VBA or in Excel deleting the columns to the right of your data (not just contents but an actual delete). I've found this typically happens when the columns to the right of your data contained data at one point and Access / Excel sees those as still containing data. Then try your import again.
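For the delete, something along these lines should do it (a sketch only; the sheet name "Export" and the last real data column K are assumptions, adjust them to your workbook):

' Sketch: hard-delete everything to the right of the real data so Access
' doesn't pick up a phantom "Field12". Sheet name and last column (K) are assumptions.
With ThisWorkbook.Worksheets("Export")
    .Range(.Columns("L"), .Columns(.Columns.Count)).Delete
End With
ThisWorkbook.Save   ' saving helps Excel shrink the used range that the import sees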
Alternatively, you could upload into a new Access staging table before pulling your desired known columns into the final table through an INSERT query. Then you can delete the staging table if you like or delete it before the next import. In this way, each import can have its own "added columns".
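A rough sketch of that staging approach from the Access VBA side (every table name, column name and path here is a placeholder, swap in your own):

' Sketch: import the worksheet as-is into a staging table, then copy only the
' known columns into the final table. All names and the path are placeholders.
Sub ImportViaStaging()
    ' Extra columns such as Field12 just land in the staging table and get ignored.
    ' Without a Range argument this imports the first worksheet in the file.
    DoCmd.TransferSpreadsheet acImport, acSpreadsheetTypeExcel12Xml, _
        "tblStaging", "C:\Data\Workbook.xlsx", True

    ' Pull only the columns you actually want into the real table.
    CurrentDb.Execute _
        "INSERT INTO tblFinal (CustomerID, Amount) " & _
        "SELECT CustomerID, Amount FROM tblStaging;", dbFailOnError

    ' Drop the staging table so the next import starts clean.
    DoCmd.DeleteObject acTable, "tblStaging"
End Sub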