Best way to handle multi-valued fields as a view/grid - dojo

In several Notes applications, instead of handling related data as separate documents, if the size of the data is small (less than the 32k limit), I'll make several multi-valued fields and display them in what I call a "List Panel". It's a table where each column displays one multi-value field. Since fielda(1) goes with fieldb(1), which goes with fieldc(1), there is a concept of rows. (I did a similar thing in my auditing routine discussed here.)
It is always assumed that each field has exactly the same number of elements.
All the multi-value fields are then stored on the single document. This avoids several coding conventions that made my eyes bleed, like having separate date-changed, who-changed-it, and new-value fields for each field we wanted to audit. It also kept to a minimum the need to create multiple fields for the same thing, which locks you into a limit: Taxrate1, Taxrate2, Taxrate3, etc.
In my "List Panel" the first column is a vertical set of checkboxes (one for each element in my lists). This is so I can select one item to bring up and edit, or select multiple "rows" to delete or to apply some kind of mass change to.
What would be the best way to handle this under XPages to get this functionality? I tried making a table but am having the devil of a time getting the checkboxes to line up with their corresponding data items.
Views and dojo grids seem to assume we're using a document for each row.

This TableWalker may provide what you want: http://www-10.lotus.com/ldd/ddwiki.nsf/dx/Tutorial-Introduction-to-XPages-Exercise-23
It was created when XPages was all very new, so it's SSJS rather than Java. But if you're comfortable with Java, converting it probably won't be a challenge.

You could use a repeat control to display the values and build a table using table row tags inside the repeat. You would want to compute the id of each checkbox so you can take an action on the selected row. The repeat var would be just one of your multi-value fields, and you use the index of the repeat to get the values for that row from the other multi-value fields, as in the sketch below.
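Here is a minimal sketch of that repeat-control idea, not a drop-in solution: it assumes a document data source named document1 and multi-value fields named FieldA and FieldB (substitute your own names), and it wraps the XPages controls in passthrough HTML table tags.

<table>
    <xp:repeat id="listPanel" var="rowValue" indexVar="rowIndex"
        value="#{javascript:document1.getItemValue('FieldA')}">
        <tr>
            <td>
                <!-- one checkbox per row; the repeat index identifies which row it belongs to -->
                <xp:checkBox id="rowCheck"></xp:checkBox>
            </td>
            <td>
                <!-- rowValue is the FieldA entry for this row -->
                <xp:text value="#{javascript:rowValue}"></xp:text>
            </td>
            <td>
                <!-- use the repeat index to pull the matching entry from the other multi-value field -->
                <xp:text value="#{javascript:document1.getItemValue('FieldB').get(rowIndex)}"></xp:text>
            </td>
        </tr>
    </xp:repeat>
</table>

In an event handler you would then loop over the same index range, check each row's checkbox state, and apply the edit, delete, or mass change to the matching element of every multi-value field.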

PowerApps filter returning incomplete data record...?

I have an Azure SQL database, and the records inside table Spiderfood_RITMData in that database include 13 different fields. Lots of stuff. I have confirmed in SQL-SMS that the records have data in each field.
There are way more items in the database than PowerApps can see using LOOKUP (1600-9000 records or more). However, I know FOR A FACT that there is only ONE record that has any given value in the NUMBER column. It's not a primary key, but it is unique in the table.
In PowerApps, I am trying to pull that field so that I can eventually parse out the individual items.
So, the commands I'm trying are:
ClearCollect(MLE_test1, Filter('Spiderfood_RITMData', "RITM2170467" in Number));
ClearCollect(MLE_test2, Search('Spiderfood_RITMData',"RITM2170467", "Number"));
However, the Collection results for MLE_test1 and MLE_test2 both are empty EXCEPT for the value of NUMBER. Say what?!
I'm trying to use the examples posted on https://learn.microsoft.com/en-us/powerapps/maker/canvas-apps/functions/function-filter-lookup but I am honestly getting baffled by this.
How should I be formatting this call such that I can pull the whole record?
Big picture explanation: I need to do a lot of data lookups into my Spiderfood_RITMData table, but it has way more than 2000 rows, and PowerApps will not perform the LookUp correctly. So my presumably smart idea is to create a MUCH SMALLER "version" of Spiderfood_RITMData as a local collection, using a more delegable function (such as Filter or in). If I filter down to the records containing a given value of NUMBER, then I go from, say, a 10,000-record SQL table to a 10-record collection, and I can do lookups against that collection for the rest of the function (I think -- I'm still experimenting). Please let me know if this is crazy or not.
LookUp is just used to get one record; instead, try this:
ClearCollect(MLE_test1, Filter('Spiderfood_RITMData', "RITM2170467" = Number));
This gets a collection with all the items where Number equals "RITM2170467".
Keep in mind that a collection populated this way only receives up to the data row limit for non-delegable queries (2000 records at most).
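To complete the asker's plan of doing further lookups against that small collection, something like the line below would follow. This is only a sketch: MLE_test1 and the Number value come from the formulas above, but the Description column used as the result is a made-up name; substitute whichever of the 13 fields you actually need.

// Runs against the in-memory MLE_test1 collection, so delegation limits no longer apply.
// "Description" is a hypothetical column name in this example.
LookUp(MLE_test1, Number = "RITM2170467", Description)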
I had the same issue. Go to App settings, and under Upcoming Features make sure Explicit column selection is turned off. Hope this does it for you.

Dynamically creating a pivot table using fuzzy matching

So, I'm constantly being given data in new and different formats. I'm on a crusade to get my workplace to standardize data for easy use, and if I manage to convince the powers that be to do so, this problem becomes entirely moot. Until then, I have the following problem:
I get data in a variety of ways. Sometimes my gross sales are called total sales. Sometimes gross sales before discounts, total sales before discounts, Gross_Sales, etc. Discounts, deductions, exempt amounts, etc. form another column. So on and so forth. I'd like to be able to do the following:
1) Figure out what columns I want,
2) Turn those columns into a pivot table.
For part 1, I have two options, and I'm wondering if there are any more. The first is to use Microsoft's fuzzy-matching add-in to help me match; I'd have a separate tab dedicated to fuzzy matching each column I need. The second is to just generate a long list of all the variants and test each one until I find a hit, assign it, and move on to testing the next one.
The second part is turning all of this into a pivot table - the resources I have so far are https://www.thespreadsheetguru.com/blog/2014/9/27/vba-guide-excel-pivot-tables and How to Create a Pivot Table in VBA
Is there a better method? Is there another way?
Edit: Slightly better method - grab the data columns, place them into a table, and pivot everything off of that table. It removes the need to re-create pivot tables; I just need to move the data over.
Having the same problem, I use a mix of your two methods.
My data consists of a bunch of logs for rejected x-ray images, and the reject reason is a free-text field. My solution was to create a table where the first column contains my desired output categories, and each subsequent column contains a different variation of it.
For example, a row might have (column one, the desired output, is the first entry):
Positioning, POS, Positioning Error, Patient Positioning
Note that these are all fairly different from each other. This is where the fuzzy matching comes in: it is used to capture all the smaller differences and misspellings around those other columns. When the fuzzy matching section decides a given reason matches a column's entry, the reason is replaced with the appropriate desired output from column 1 of the table. In my example, a reason of 'Possitioning Err' [sic] would match column 3 (Positioning Error) and then get converted to Positioning.
Then wash, rinse, repeat over the rest of your data as needed. This approach was super useful and fairly flexible in helping standardize my data. It was also computationally more expensive, but you'd only need to run the matching portion once, I guess.
As for the actual mechanics of doing this - I use Excel 2010, so there's no built-in functionality. I run the fuzzy-matching code on a temporary worksheet until the best percentage matches are found, and then overwrite the actual source data afterwards.

RSA - Archer - Hide multiple fields in a search result

When performing a search in Archer, the result contains some unnecessary fields. Is there a way to show only fields that belong to the "General Information" application? In other words, there are some fields that were suggested to be added to the "General Information" tab which appear in a search result.
I know I can disable these fields by selecting the field, clicking on "modify field properties", then "options", and unticking "show field in a search result".
Since I have multiple applications and a lot of fields, doing this will take a lot of time. Is there any script or trick to hide all these fields in a search result?
You could always use the API and create your own search XML to contain only the fields you want to see.
But short of doing that, I think you've hit the nail on the head. You'll need to go into each application and modify each of the fields you don't want shown. I don't think it's a waste of time, as you're surely not changing this search result every time... it's a one-time deal, and if it makes it easier for you or the end users, it's certainly worth it IMO.
If you have access to the database then you can modify the "fielddef" table and make fields not searchable by default. You can join the "fielddef" table with "level" and "moduletranslation" and target only specific modules this way.
I don't have the SQL code for this since I didn't have to make a bulk update to the fields to make them not searchable.
It should take 5 minutes to join these 3 tables and make an update if you are good with SQL.
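If you do go the database route, the update would look something like the sketch below. Treat it purely as a starting point: the join columns and the flag name (level_id, module_id, is_searchable_by_default, module_name) are guesses rather than the real Archer schema, so verify the actual column names and back up the database before running anything.

-- All column names below are assumptions; check them against the real
-- fielddef / level / moduletranslation tables first.
UPDATE fd
SET fd.is_searchable_by_default = 0
FROM fielddef AS fd
INNER JOIN [level] AS lv ON lv.level_id = fd.level_id
INNER JOIN moduletranslation AS mt ON mt.module_id = lv.module_id
WHERE mt.module_name = 'General Information';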

Is there a (creative) way to hide a text field in InDesign if there is no information in the data merge field?

I am creating a data-merge document in InDesign.
There are various tables that I've created which only show as many rows as there is actual data in the fields, through some creative table and cell styles.
Now I've been asked to have an entirely separate table show only if there is information in any of those fields.
I'm at a total loss. With the way the current structure is set up, I can get it to not display any text, but it still shows empty header cells and one line of empty row cells.
Pre-DataMerge, with the data fields
Post-Datamerge, with the resulting empty cells
Any creative ideas to hide that table? I was thinking there might be a way to hide the entire text field, if not the table. Maybe a script? I tried one that deletes blank tables, but it didn't seem to work after the data merge was run.
I am not sure you can get that level of processing with InDesign data merge. You could write a script to remove those tables after the merge, or use a dedicated plugin such as EasyCatalog that can handle such empty items natively.

Access 2010 Database Cleanup

I have problems with the records in my database. I have a template with about 260,000 records, and each record has 3 identification columns to determine what time period the record is from and its location: one for year, one for month, and one for region. The information identifying the specific item is TagName and Description. The problem I am having is that when someone entered data into this database, they entered different descriptions for the same device; I know this because the tag name is the same. Can I write code that will go through the database, find the items with the same tag name, and use one of the descriptions to replace the ones that are different, so I have a more uniform database? Also, some devices do not have tag names, so we would want to avoid the "" case.
Also, moving forward, I have added more columns to the database to allow for more information to be retrieved. Is there a way that I can backfill the data to older records once I know that they have the same tag name and description, once the database is cleaned up? Thanks in advance; the information is much appreciated.
I assume that this will have to be done with VBA of some sort to modify records, by looking for the first record with a given tag name and using a variable to assign its description to all the other items with the same tag name? I am just not sure of the correct VBA syntax to go about this. I assume a similar method would be used for the backfilling process?
Your question is rather broad and multifaceted, so I'll answer key parts in steps:
The problem I am having is that when someone entered data into this database, they entered different descriptions for the same device; I know this because the tag name is the same.
While you could fix up those inconsistencies easily enough with a bit of SQL code, it would be better to avoid those inconsistencies being possible in the first place:
Create a new table, let's call it 'Tags', with TagName and TagDescription fields, and with TagName set as the primary key. Ensure both fields have Required set to True and Allow Zero Length set to False.
Populate this new table with all possible tags - you can do this with a one-off 'append query' in Access jargon (an INSERT INTO statement in SQL); see the sketch after these steps.
Delete the tag description column from the main table.
Go into the Relationships view and add a one-to-many relation between the two tables, linking the TagName field in the main table to the TagName field in the Tags table.
As required, create a query that aggregates data from the two tables.
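A sketch of the SQL for steps 2 and 5, assuming the main table is called MainTable (substitute your real table name); Description is the existing description column from the question:

-- Step 2: one-off append query to seed the Tags table, keeping one
-- description per tag name (First() just picks one of them).
INSERT INTO Tags (TagName, TagDescription)
SELECT TagName, First(Description) AS TagDescription
FROM MainTable
WHERE TagName IS NOT NULL AND TagName <> ''
GROUP BY TagName;

-- Step 5: a query that joins the description back in for reporting.
SELECT m.*, t.TagDescription
FROM MainTable AS m
INNER JOIN Tags AS t ON m.TagName = t.TagName;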
Also, some devices do not have tag names, so we would want to avoid the "" case.
In Access, the concept of an empty string ("") is different from the concept of a true blank or 'null'. As such, it would be a good idea to replace all empty strings (if there are any) with nulls -
UPDATE MyTable SET TagName = Null WHERE TagName = '';
You can then set the TagName field's Allow Zero Length property to False in the table designer.
Also, moving forward, I have added more columns to the database to allow for more information to be retrieved
Think less in terms of adding more columns and more in terms of adding more tables.
I assume that this will have to be done with VBA of some sort to modify records
Either VBA, SQL, or the Access query designers (which create SQL code behind the scenes). In terms of being able to crunch through data the quickest, SQL is best, though pure VBA (and in particular, using the DAO object library) can be easier to understand and follow.
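For completeness, if you keep the Description column on the main table and just want to make it uniform (the 'bit of SQL code' route mentioned at the start), an Access update query along these lines would do it once the Tags table is populated. Again, MainTable is a placeholder name:

-- Copy the single agreed description per tag back over every row sharing that tag name.
UPDATE MainTable INNER JOIN Tags ON MainTable.TagName = Tags.TagName
SET MainTable.Description = Tags.TagDescription
WHERE MainTable.TagName IS NOT NULL AND MainTable.TagName <> '';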