Reverse Cartesian Product - sql

Given the data set below:
a | b | c | d
1 | 3 | 7 | 11
1 | 5 | 7 | 11
1 | 3 | 8 | 11
1 | 5 | 8 | 11
1 | 6 | 8 | 11
Perform a reverse Cartesian product to get:
a | b | c | d
1 | 3,5 | 7,8 | 11
1 | 6 | 8 | 11
I am currently working with Scala, and my input/output data type is currently:
ListBuffer[Array[Array[Int]]]
I have come up with a solution (seen below), but feel it could be optimized. I am open to optimizations of my approach, and to completely new approaches. Solutions in Scala and C# are preferred.
I am also curious whether this could be done in MS SQL.
My current solution:
import scala.collection.mutable.ListBuffer
import scala.util.Sorting.quickSort

def main(args: Array[String]): Unit = {
  // Input
  val data = ListBuffer(
    Array(Array(1), Array(3), Array(7), Array(11)),
    Array(Array(1), Array(5), Array(7), Array(11)),
    Array(Array(1), Array(3), Array(8), Array(11)),
    Array(Array(1), Array(5), Array(8), Array(11)),
    Array(Array(1), Array(6), Array(8), Array(11)))
  reverseCartesianProduct(data)
}

def reverseCartesianProduct(input: ListBuffer[Array[Array[Int]]]): ListBuffer[Array[Array[Int]]] = {
  val startIndex = input(0).size - 1
  var results: ListBuffer[Array[Array[Int]]] = input
  for (i <- startIndex to 0 by -1) {
    results = groupForward(results, i, startIndex)
  }
  results
}

def groupForward(input: ListBuffer[Array[Array[Int]]], groupingIndex: Int, startIndex: Int): ListBuffer[Array[Array[Int]]] = {
  if (startIndex < 0) {
    val reduced = input.reduce((a, b) => mergeRows(a, b))
    return ListBuffer(reduced)
  }
  val grouped = if (startIndex == groupingIndex) {
    Map(0 -> input)
  } else {
    groupOnIndex(input, startIndex)
  }
  val results = grouped.flatMap {
    case (index, values: ListBuffer[Array[Array[Int]]]) =>
      groupForward(values, groupingIndex, startIndex - 1)
  }
  results.to[ListBuffer]
}

def groupOnIndex(list: ListBuffer[Array[Array[Int]]], index: Int): Map[Int, ListBuffer[Array[Array[Int]]]] = {
  var results = Map[Int, ListBuffer[Array[Array[Int]]]]()
  list.foreach(a => {
    val key = a(index).toList.hashCode()
    if (!results.contains(key)) {
      results += (key -> ListBuffer[Array[Array[Int]]]())
    }
    results(key) += a
  })
  results
}

def mergeRows(a: Array[Array[Int]], b: Array[Array[Int]]): Array[Array[Int]] = {
  a.zip(b).map { case (array1, array2) =>
    val m = array1 ++ array2
    quickSort(m)
    m.distinct
  }
}
The way this works is:
1. Loop over the columns from right to left (groupingIndex specifies which column to run on; this is the only column whose values do not have to be equal for rows to merge).
2. Recursively group the data on all other columns (not groupingIndex).
3. After grouping on all columns, the rows in each group have equivalent values in every column except the grouping column.
4. Merge the rows with matching columns: take the distinct values for each column and sort each one.
I apologize if some of this does not make sense, my brain is not functioning today.
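For reference, a single grouping-and-merging pass like the one described above can be sketched in Kotlin. This is a minimal sketch under an assumed row model of `List<List<Set<Int>>>` (not the `ListBuffer[Array[Array[Int]]]` used above): group rows on every column except the grouping column, then union and sort the grouping column's values within each group.

```kotlin
// One pass of the merge: rows whose other columns are all equal get their
// values in column `groupCol` unioned into one sorted set.
fun mergeOnColumn(rows: List<List<Set<Int>>>, groupCol: Int): List<List<Set<Int>>> =
    rows.groupBy { row -> row.filterIndexed { i, _ -> i != groupCol } }
        .map { (key, group) ->
            val merged = group.flatMap { it[groupCol] }.toSortedSet()
            // rebuild the row: `key` is the row minus the grouping column
            List(key.size + 1) { i ->
                when {
                    i < groupCol -> key[i]
                    i == groupCol -> merged
                    else -> key[i - 1]
                }
            }
        }
```

The full algorithm would run this pass once per column, right to left, as in the Scala version.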

Here is my take on this. The code is in Java but could easily be converted into Scala or C#.
I run groupingBy on all combinations of n-1 columns and go with the one that has the lowest group count, meaning the largest merge depth, so this is a kind of greedy approach. It is not guaranteed to find the optimal solution (minimizing the number of result rows k is NP-hard, see link here for an explanation), but it will find a valid solution, and do it rather fast.
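The greedy selection step itself is language-neutral; here is a hedged Kotlin sketch (rows modeled as `List<List<Set<Int>>>`, which is an assumption of this sketch, not the `Table` class used below): for each way of leaving one column out, group rows by the remaining columns, and keep the grouping with the fewest groups.

```kotlin
// Greedy step: try every (n-1)-column key, keep the grouping with the
// fewest groups, i.e. the deepest possible merge on this iteration.
fun bestGrouping(rows: List<List<Set<Int>>>): Map<List<Set<Int>>, List<List<Set<Int>>>> {
    val n = rows[0].size
    return (0 until n)
        .map { skip -> rows.groupBy { row -> row.filterIndexed { i, _ -> i != skip } } }
        .minByOrNull { it.size }!!
}
```

On the question's sample data this picks the grouping that leaves column b out (2 groups), which is exactly the merge the Java code below performs first.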
Full example here: https://github.com/jbilander/ReverseCartesianProduct/tree/master/src
Main.java
import java.util.*;
import java.util.stream.Collectors;

public class Main {

    public static void main(String[] args) {
        List<List<Integer>> data = List.of(List.of(1, 3, 7, 11), List.of(1, 5, 7, 11), List.of(1, 3, 8, 11), List.of(1, 5, 8, 11), List.of(1, 6, 8, 11));
        boolean done = false;
        int rowLength = data.get(0).size(); // 4
        List<Table> tables = new ArrayList<>();
        // load data into table
        for (List<Integer> integerList : data) {
            Table table = new Table(rowLength);
            tables.add(table);
            for (int i = 0; i < integerList.size(); i++) {
                table.getMap().get(i + 1).add(integerList.get(i));
            }
        }
        // keep track of count, needed so we know when to stop iterating
        int numberOfRecords = tables.size();
        // start algorithm
        while (!done) {
            Collection<List<Table>> result = getMinimumGroupByResult(tables, rowLength);
            if (result.size() < numberOfRecords) {
                tables.clear();
                for (List<Table> tableList : result) {
                    Table t = new Table(rowLength);
                    tables.add(t);
                    for (Table table : tableList) {
                        for (int i = 1; i <= rowLength; i++) {
                            t.getMap().get(i).addAll(table.getMap().get(i));
                        }
                    }
                }
                numberOfRecords = tables.size();
            } else {
                done = true;
            }
        }
        tables.forEach(System.out::println);
    }

    private static Collection<List<Table>> getMinimumGroupByResult(List<Table> tables, int rowLength) {
        Collection<List<Table>> result = null;
        int min = Integer.MAX_VALUE;
        for (List<Integer> keyCombination : getKeyCombinations(rowLength)) {
            switch (rowLength) {
                case 4: {
                    Map<Tuple3<TreeSet<Integer>, TreeSet<Integer>, TreeSet<Integer>>, List<Table>> map =
                            tables.stream().collect(Collectors.groupingBy(t -> new Tuple3<>(
                                    t.getMap().get(keyCombination.get(0)),
                                    t.getMap().get(keyCombination.get(1)),
                                    t.getMap().get(keyCombination.get(2))
                            )));
                    if (map.size() < min) {
                        min = map.size();
                        result = map.values();
                    }
                }
                break;
                case 5: {
                    // TODO: Handle n = 5
                }
                break;
                case 6: {
                    // TODO: Handle n = 6
                }
                break;
            }
        }
        return result;
    }

    private static List<List<Integer>> getKeyCombinations(int rowLength) {
        switch (rowLength) {
            case 4:
                return List.of(List.of(1, 2, 3), List.of(1, 2, 4), List.of(2, 3, 4), List.of(1, 3, 4));
            // TODO: handle n = 5, n = 6, etc...
        }
        return List.of(List.of());
    }
}
Output of tables.forEach(System.out::println)
Table{1=[1], 2=[3, 5, 6], 3=[8], 4=[11]}
Table{1=[1], 2=[3, 5], 3=[7], 4=[11]}
or rewritten for readability:
a | b | c | d
--|-------|---|---
1 | 3,5,6 | 8 | 11
1 | 3,5 | 7 | 11
If you were to do all this in SQL (MySQL) you could possibly use group_concat(); I think MS SQL has something similar here: simulating-group-concat, or STRING_AGG if SQL Server 2017. But I think you would have to work with text columns, which is a bit nasty in this case:
e.g.
create table my_table (A varchar(50) not null, B varchar(50) not null,
C varchar(50) not null, D varchar(50) not null);
insert into my_table values ('1','3,5','4,15','11'), ('1','3,5','3,10','11');
select A, B, group_concat(C order by C) as C, D from my_table group by A, B, D;
This would give the result below, so you would have to parse, sort, and deduplicate the comma-separated result for any next merge iteration (group by) to be correct.
['1', '3,5', '3,10,4,15', '11']
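The re-normalization step that last sentence describes could look like this hypothetical helper (sketched in Kotlin rather than SQL; the name `normalizeCsv` is made up for illustration):

```kotlin
// Hypothetical helper: re-sort and dedupe a group_concat-style value so
// the next group-by iteration compares merged cells consistently,
// e.g. "3,10,4,15" -> "3,4,10,15".
fun normalizeCsv(csv: String): String =
    csv.split(",").map { it.trim().toInt() }.toSortedSet().joinToString(",")
```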

Related

Get index of each root on level wise in tree data structure

Hey, I am working on a tree data structure. I want to know whether we can get the index of each node, level-wise. The diagram below represents how I want the value + index: the Level A / Level B rows show node values, and the index rows show index values.
Node
| | |
Level A -> 1 2 3
index value-> 0 1 2
| | | | | |
| | | | | |
Level B-> 4 5 6 7 8 9
index value-> 0 1 2 3 4 5
....// more levels
How can we achieve the index at each level? Below is my logic for how I add values at each level. Could someone suggest how I can achieve this?
var baseNode: LevelIndex = LevelIndex()
var defaultId = "1234"

fun main() {
    val list = getUnSortedDataListForLevel()
    val tempHashMap: MutableMap<String, LevelIndex> = mutableMapOf()
    list.forEach { levelClass ->
        levelClass.levelA?.let { levelA ->
            val levelOneTempHashMapNode = tempHashMap["level_a${levelA}"]
            if (levelOneTempHashMapNode != null) {
                if (defaultId == levelClass.id && levelOneTempHashMapNode is LevelOne) {
                    levelOneTempHashMapNode.defaultValue = true
                }
                return@let
            }
            val tempNode = LevelOne().apply {
                value = levelA
                if (defaultId == levelClass.id) {
                    defaultValue = true
                }
            }
            baseNode.children.add(tempNode)
            tempHashMap["level_a${levelA}"] = tempNode
        }
        levelClass.levelB?.let { levelB ->
            val levelTwoTempHashMapNode = tempHashMap["level_a${levelClass.levelA}_level_b${levelB}"]
            if (levelTwoTempHashMapNode != null) {
                if (defaultId == levelClass.id && levelTwoTempHashMapNode is LevelTwo) {
                    levelTwoTempHashMapNode.defaultValue = true
                }
                return@let
            }
            val tempNode = LevelTwo().apply {
                value = levelB
                if (defaultId == levelClass.id) {
                    defaultValue = true
                }
            }
            val parent = tempHashMap["level_a${levelClass.levelA}"] ?: baseNode
            parent.children.add(tempNode)
            tempHashMap["level_a${levelClass.levelA}_level_b${levelB}"] = tempNode
        }
        levelClass.levelC?.let { levelC ->
            val tempNode = LevelThree().apply {
                value = levelC
                if (defaultId == levelClass.id) {
                    defaultValue = true
                }
            }
            val parent = tempHashMap["level_a${levelClass.levelA}_level_b${levelClass.levelB}"] ?: baseNode
            parent.children.add(tempNode)
        }
    }
}

open class LevelIndex(
    var value: String? = null,
    var children: MutableList<LevelIndex> = arrayListOf()
)

class LevelOne : LevelIndex() {
    var defaultValue: Boolean? = false
}

class LevelTwo : LevelIndex() {
    var defaultValue: Boolean? = false
}

class LevelThree : LevelIndex() {
    var defaultValue: Boolean = false
}
UPDATE
I want the index value per level because I have an id and want to match each combination against it: if the value is present I store its default value as true, and then I need to find that node's index value.
Node
| | |
Level A -> 1 2 3
index value-> 0 1 2
default value-> false true false
| | | | | |
| | | | | |
Level B-> 4 5 6 7 8 9
index value-> 0 1 2 3 4 5
default value->false false true false false false
....// more levels
So, for Level A I'll get index 1.
For Level B I'll get index 2.
I'd create a list to put the nodes at each level in order. You can recursively collect them from your tree.
val nodesByLevel = List(3) { mutableListOf<LevelIndex>() }

fun collectNodes(parent: LevelIndex) {
    for (child in parent.children) {
        val listIndex = when (child) {
            is LevelOne -> 0
            is LevelTwo -> 1
            is LevelThree -> 2
            // I made LevelIndex a sealed class. Otherwise you would need an else branch here.
        }
        nodesByLevel[listIndex] += child
        collectNodes(child)
    }
}

collectNodes(baseNode)
Now nodesByLevel contains three lists containing all the nodes in each layer in order.
If you just need the String values, you could change that mutableList to use a String type and use += child.value ?: "" instead, although I would make value non-nullable (so you don't need ?: ""), because what use is a node with no value?
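The same idea can be shown self-contained with a simplified node class (a sketch only: the `level` field here stands in for the LevelOne/LevelTwo/LevelThree subclasses, and the tree is a made-up example mirroring the question's diagram):

```kotlin
// Each node carries its level; collect() fills one list per level in
// traversal order, so a node's index within its level is simply its
// position in that list.
class Node(val value: String, val level: Int) {
    val children = mutableListOf<Node>()
}

fun collect(parent: Node, byLevel: MutableMap<Int, MutableList<Node>>) {
    for (child in parent.children) {
        byLevel.getOrPut(child.level) { mutableListOf() }.add(child)
        collect(child, byLevel)
    }
}
```

A node's level-wise index is then `byLevel[level]!!.indexOf(node)`.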
Edit
I would move defaultValue up into the parent class so you don't have to cast the nodes to be able to read it. And I'm going to treat it as non-nullable.
sealed class LevelIndex(
    var value: String = "",
    val children: MutableList<LevelIndex> = arrayListOf(),
    var isDefault: Boolean = false
)
Then if you want to do something with the items based on their indices:
for ((layerNumber, layerList) in nodesByLevel.withIndex()) {
    for ((nodeIndexInLayer, node) in layerList.withIndex()) {
        val selectedIndexForThisLayer = TODO() // determined using layerNumber
        node.isDefault = nodeIndexInLayer == selectedIndexForThisLayer
    }
}

problems with index of array

I'm writing a function that allows you to remove certain numbers from an int arraylist.
My code
for (i in 1 until 4) {
    divider = setDivider(i)
    for (index in 0 until numbers.size) {
        if (index <= numbers.size) {
            if (numbers[index] % divider == 0 && !isDone) {
                numbers.removeAt(index)
            }
        } else {
            isDone = true
        }
    }
    if (isDone)
        break
}
the function to set the divider
fun setDivider(divider: Int): Int {
    when (divider) {
        1 -> return 2
        2 -> return 3
        3 -> return 5
        4 -> return 7
    }
    return 8
}
I do not know why I am getting the error Index 9 out of bounds for length 9.
Author explained in the comments that the goal is to remove all numbers that are divisible by 2, 3, 5 and 7.
It can be achieved much easier by utilizing ready to use functions from stdlib:
val dividers = listOf(2, 3, 5, 7)
numbers.removeAll { num ->
dividers.any { num % it == 0 }
}
It removes elements that satisfy the provided condition (is divisible) for any of provided dividers.
Also, it is often cleaner to not modify a collection in-place, but to create an entirely new collection:
val numbers2 = numbers.filterNot { num ->
dividers.any { num % it == 0 }
}
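A quick demo of the removeAll variant on a made-up list:

```kotlin
// Drop everything divisible by 2, 3, 5 or 7 from a sample list.
val dividers = listOf(2, 3, 5, 7)
val numbers = mutableListOf(1, 4, 9, 11, 13, 14, 25)
numbers.removeAll { num -> dividers.any { num % it == 0 } }
println(numbers)  // [1, 11, 13]
```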

How to write Kotlin code to store all odd numbers starting at 7 till 101 and print sum of them?

My code goes like this:
var sum: Int = 0
var num: Int? = null
for (num in 7..101)
    if (num % 2 != 0)
        print("$num ")
var result = sum + num
num++
println("$result")
Simply filter the range 7..101 and sum the items:
val total = (7..101).filter { it % 2 == 1 }.sum()
println(total)
Or use sumBy():
val total = (7..101).sumBy { if (it % 2 == 1) it else 0}
println(total)
Or first create a list of all the odd numbers and then get the sum:
val list = (7..101).filter { it % 2 == 1 }
val total = list.sum()
println(total)
If you need to store them as well, create a MutableList, add the odd numbers while iterating, and sum the list at the end:
val oddNumbers = mutableListOf<Int>()
(7..101).forEach { n ->
    if (n % 2 != 0) {
        oddNumbers.add(n)
    }
}
println(oddNumbers.sum())
You can also try this:
val array = IntArray(48) { 2 * it + 7 }
print(array.sum())
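As a sanity check, every variant above computes the same arithmetic series: 48 odd terms from 7 to 101 with average (7 + 101) / 2 = 54.

```kotlin
// Cross-check against the closed form: 48 * 54 = 2592.
val total = (7..101).filter { it % 2 != 0 }.sum()
println(total)  // 2592
```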

Communication channel error when running BigQuery udf

I recently posted on stack overflow about an OOM error while running many udfs on a single table (bigquery udf out of memory issues). This error seems to have been partially fixed, however, I am running into a new error when running a udf on a 10,000 row table. Here is the error message:
Error: An error occurred while communicating with a subprocess. message: "Communication channel error during 4 command" sandbox_process_error { }
Error Location: User-defined function
Job ID: broad-cga-het:bquijob_32bc01d_1569f11b8a2
The error does not occur when I remove the emit statement in the udf, so the error must be occurring when the udf tries to write back to a different table.
Here is a copy of the udf itself:
bigquery.defineFunction(
    'permute',
    ['obj_nums', 'num_obj_per_indiv', 'row_number'], // Names of input columns
    [{"name": "num_cooccurrences_list", "type": "string", "mode": "nullable"}], // Output schema
    permute
);
function permute(row, emit) {
    var obj_ids = row['obj_nums'].split(",").map(function (x) {
        return parseInt(x, 10);
    });
    var num_obj_per_indiv = row['num_obj_per_indiv'].split(",").map(function (x) {
        return parseInt(x, 10);
    });
    var row_number = row['row_number'];

    // randomly shuffle objs using Durstenfeld shuffle algorithm
    obj_ids = shuffle_objs(obj_ids);

    // form dictionary of obj_pairs from obj_ids
    var perm_run_obj_set = new Set(obj_ids);
    var perm_run_obj_unique = Array.from(perm_run_obj_set);
    perm_run_obj_unique.sort();
    var perm_run_obj_pairs_dict = {};
    var output = {};
    for (var i = 0; i < perm_run_obj_unique.length - 1; i++) {
        for (var j = i + 1; j < perm_run_obj_unique.length; j++) {
            var obj_pair = [perm_run_obj_unique[i], perm_run_obj_unique[j]].sort().join("_");
            perm_run_obj_pairs_dict[obj_pair] = 0;
        }
    }

    // use fixed number of objs per indiv and draw from shuffled objs
    var perm_cooccur_dict = {};
    //num_obj_per_indiv = num_obj_per_indiv.slice(0,3);
    for (var index in num_obj_per_indiv) {
        var obj_count = num_obj_per_indiv[index];
        var perm_run_objs = [];
        for (var j = 0; j < obj_count; j++) {
            perm_run_objs.push(obj_ids.pop());
        }
        perm_run_objs = Array.from(new Set(perm_run_objs));
        while (perm_run_objs.length > 1) {
            var current_obj = perm_run_objs.pop();
            for (var pair_obj_ind in perm_run_objs) {
                var pair_obj = perm_run_objs[pair_obj_ind];
                var sorted_pair = [current_obj, pair_obj].sort().join("_");
                perm_run_obj_pairs_dict[sorted_pair] += 1;
                // console.log({"obj_pair":[current_obj,pair_obj].sort().join("_"),"perm_run_id":row_number})
                // emit({"obj_pair":[current_obj,pair_obj].sort().join("_"),"perm_run_id":row_number});
            }
        }
    }
    // emit({"obj_pair":[current_obj,pair_obj].sort().join("_"),"perm_run_id":row_number});

    // form output dictionary
    var num_cooccur_output = "";
    for (var obj_pair in perm_run_obj_pairs_dict) {
        //emit({"obj_pair":obj_pair,"num_cooccur":perm_run_obj_pairs_dict[obj_pair]});
        num_cooccur_output += String(perm_run_obj_pairs_dict[obj_pair]);
        num_cooccur_output += ",";
    }
    num_cooccur_output = num_cooccur_output.substring(0, num_cooccur_output.length - 1);
    emit({"num_cooccurrences_list": num_cooccur_output});
}
/**
 * Randomize array element order in-place.
 * Using Durstenfeld shuffle algorithm.
 */
function shuffle_objs(obj_array) {
    for (var i = obj_array.length - 1; i > 0; i--) {
        var j = Math.floor(Math.random() * (i + 1));
        var temp = obj_array[i];
        obj_array[i] = obj_array[j];
        obj_array[j] = temp;
    }
    return obj_array;
}
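For what it's worth, the Durstenfeld (Fisher-Yates) shuffle above is easy to port to other languages; a minimal Kotlin sketch of the same algorithm:

```kotlin
import kotlin.random.Random

// Walk from the end of the array, swapping each element with a random
// element at or before it; each permutation is equally likely.
fun shuffleInPlace(a: IntArray, rng: Random = Random.Default) {
    for (i in a.size - 1 downTo 1) {
        val j = rng.nextInt(i + 1) // uniform index in 0..i
        val tmp = a[i]
        a[i] = a[j]
        a[j] = tmp
    }
}
```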
Any help would be greatly appreciated!
Thank you,
Daniel
Not that this directly answers your original question, but I'm not convinced that you need a UDF for this type of transformation. For example, using standard SQL (uncheck "Use Legacy SQL" under "Show Options") you can perform an array transformation such as:
WITH T AS (SELECT [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] AS arr)
SELECT
x,
new_x
FROM T,
UNNEST(
ARRAY(SELECT AS STRUCT
x,
arr[OFFSET(CAST(RAND() * (off + 1) AS INT64))] AS new_x
FROM UNNEST(arr) AS x WITH OFFSET off));
+---+-------+
| x | new_x |
+---+-------+
| 0 | 1 |
| 1 | 2 |
| 2 | 0 |
| 3 | 2 |
| 4 | 4 |
| 5 | 0 |
| 6 | 5 |
| 7 | 4 |
| 8 | 8 |
| 9 | 3 |
+---+-------+
I can explain more if it helps, but the gist of the query is that it randomizes the elements in arr using the formula from your UDF above. The FROM T, UNNEST(... unrolls the elements of the array to make them easier to see, but I could have alternatively done:
WITH T AS (SELECT [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] AS arr)
SELECT
ARRAY(SELECT AS STRUCT
x,
arr[OFFSET(CAST(RAND() * (off + 1) AS INT64))] AS new_x
FROM UNNEST(arr) AS x WITH OFFSET off)
FROM T;
This gives an array of structs as output, where each x is associated with new_x.

Potential Duplicates Detection, with 3 Severity Level

I want to make a program that detects potential duplicates, with 3 severity levels.
Consider my data to be in only two columns, but with thousands of rows.
Data in the second column is delimited only with commas. Example data:
Number | Material
1 | helmet,valros,42
2 | helmet,iron,knight
3 | valros,helmet,42
4 | knight,helmet
5 | valros,helmet,42
6 | plain,helmet
7 | helmet, leather
and my 3 levels is :
very high : A,B,C vs A,B,C
high : A,B,C vs B,C,A
so so : A,B,C vs A,B
So far I have only managed to make the first level work; I don't know how to do the 2nd and 3rd levels.
What I've tried:
Sub duplicates_separation()
    Dim duplicate(), i As Long
    Dim delrange As Range, cell As Long
    Dim shtIn As Worksheet, shtOut As Worksheet
    Set shtIn = ThisWorkbook.Sheets("input")
    Set shtOut = ThisWorkbook.Sheets("output")
    x = 2
    y = 1
    Set delrange = shtIn.Range("b1:b10000") 'set your range here
    ReDim duplicate(0)
    'search duplicates in 2nd column
    For cell = 1 To delrange.Cells.Count
        If Application.CountIf(delrange, delrange(cell)) > 1 Then
            ReDim Preserve duplicate(i)
            duplicate(i) = delrange(cell).Address
            i = i + 1
        End If
    Next
    'print duplicates
    For i = UBound(duplicate) To LBound(duplicate) Step -1
        shtOut.Cells(x, 1).EntireRow.Value = shtIn.Range(duplicate(i)).EntireRow.Value
        x = x + 1
    Next i
End Sub
duplicates detected by the program :
3 | valros,helmet,42
5 | valros,helmet,42
what i expected:
Number | Material
1 | helmet,valros,42
3 | valros,helmet,42
5 | valros,helmet,42
4 | knight,helmet
2 | helmet,iron,knight
I have an idea for detecting level-2 duplicates, but I think it will be complicated and make the program slow:
1. turn column 2 into separate columns with the "text to columns" command
2. sort the columns from A to Z (alphabetically)
3. concatenate the columns
4. do a CountIf like in the level-1 detection
Is there a way to detect the 2nd & 3rd level duplicates?
UPDATE
Yesterday I went to a friend's house to consult about this problem, but his solution is in Java, which I don't understand:
public class ali {

    static void sPrint(String[] Printed) {
        for (int iC = 0; iC < Printed.length; iC++) {
            System.out.print(String.valueOf(Printed[iC]) + " | ");
        }
        System.out.println();
    }

    public static void main(String Args[]) {
        int defaultLength = 10;
        int indexID = 0;
        int indexDesc = 1;
        String[] DETECTORP1 = new String[defaultLength];
        String[] DETECTORP2 = new String[defaultLength];
        String[] DETECTORP3 = new String[defaultLength];
        String[] DETECTORP4 = new String[defaultLength];
        String[][] theString = new String[5][2];
        theString[0] = new String[]{"1", "A, B, C, D"};
        theString[1] = new String[]{"2", "A, B, C, D"};
        theString[2] = new String[]{"3", "A, B, C, D, E"};
        theString[3] = new String[]{"4", "A, B, D, C, E"};
        theString[4] = new String[]{"5", "A, B, D, C, E, F"};
        int P1 = 0;
        int P2 = 0;
        int P3 = 0;
        int P4 = 0;
        for (int iC = 0; iC < theString.length; iC++) {
            System.out.println(theString[iC][indexID] + " -> " + theString[iC][indexDesc]);
        }
        for (int iC = 0; iC < theString.length; iC++) {
            int LEX;
            String theReference[] = theString[iC][indexDesc].replace(",", ";;").split(";;");
            for (int iD = 0; iD < theString.length; iD++) {
                if (iC != iD) {
                    String theCompare[] = theString[iD][1].replace(",", ";;").split(";;");
                    if (theReference.length == theCompare.length) {
                        LEX = 0;
                        int theLength = theReference.length;
                        for (int iE = 0; iE < theLength; iE++) {
                            if (theReference[iE].equals(theCompare[iE])) {
                                LEX += 1;
                            }
                        }
                        if (LEX == theLength) {
                            DETECTORP1[P1] = theString[iC][indexID] + " WITH " + theString[iD][indexID];
                            P1 += 1;
                        } else {
                            LEX = 0;
                            for (int iF = 0; iF < theReference.length; iF++) {
                                for (int iG = 0; iG < theCompare.length; iG++) {
                                    if (theReference[iF].equals(theCompare[iG])) {
                                        LEX += 1;
                                        break;
                                    }
                                }
                            }
                            if (LEX == theReference.length) {
                                DETECTORP2[P2] = theString[iC][indexID] + " WITH " + theString[iD][indexID];
                                P2 += 1;
                            }
                        }
                    } else {
                        LEX = 0;
                        if (theReference.length > theCompare.length) {
                            for (int iF = 0; iF < theReference.length; iF++) {
                                for (int iG = 0; iG < theCompare.length; iG++) {
                                    if (iG == iF) {
                                        if (theReference[iF].equals(theCompare[iF])) {
                                            LEX += 1;
                                            break;
                                        }
                                    }
                                }
                            }
                            if (LEX <= theReference.length && LEX >= theCompare.length) {
                                DETECTORP3[P3] = theString[iC][indexID] + " WITH " + theString[iD][indexID];
                                P3 += 1;
                            }
                        } else {
                            LEX = 0;
                            for (int iF = 0; iF < theCompare.length; iF++) {
                                for (int iG = 0; iG < theReference.length; iG++) {
                                    if (iG == iF) {
                                        if (theCompare[iF].equals(theReference[iF])) {
                                            LEX += 1;
                                            // System.out.println(theReference[iG] + "==" + theCompare[iG]);
                                            break;
                                        }
                                    }
                                }
                            }
                            if (LEX <= theCompare.length && LEX >= theReference.length) {
                                DETECTORP3[P3] = theString[iC][indexID] + " WITH " + theString[iD][indexID];
                                P3 += 1;
                            }
                        }
                    }
                }
            }
        }
        sPrint(DETECTORP1);
        sPrint(DETECTORP2);
        sPrint(DETECTORP3);
    }
}
how to do this in VBA?
Really, it depends on how you want to define "severity level". Here's one way to do it, not necessarily the best: use the Levenshtein distance.
Represent each of your items by a one-character attribute symbol, e.g.
H helmet
K knight
I iron
$ Leather
^ Valros
╔ Plain
¢ Whatever
etc.
Then convert your Material lists into strings containing sequences of characters representing these attributes:
HIK = helmet,iron,knight
¢H = plain,helmet
Then compute the Levenshtein distance between those two strings. That will be your "severity level".
Debug.Print LevenshteinDistance("HIK","¢H")
'returns 3
Two implementations of the Levenshtein distance are shown in Wikipedia. And indeed you are in luck: someone on StackOverflow ported this to VBA.
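For reference, a minimal two-row dynamic-programming version of the distance, sketched here in Kotlin (the VBA port linked above computes the same values):

```kotlin
// Classic Levenshtein distance: dp over prefixes, keeping only the
// previous row; cost 1 per insertion, deletion, or substitution.
fun levenshtein(a: String, b: String): Int {
    var prev = IntArray(b.length + 1) { it }
    for (i in 1..a.length) {
        val cur = IntArray(b.length + 1)
        cur[0] = i
        for (j in 1..b.length) {
            val cost = if (a[i - 1] == b[j - 1]) 0 else 1
            cur[j] = minOf(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost)
        }
        prev = cur
    }
    return prev[b.length]
}
```

With the attribute encoding above, `levenshtein("HIK", "¢H")` returns 3, matching the Debug.Print example.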
In the comments section below, you say you don't like having to represent each of your possible attributes by one-character symbols. Fair enough; I agree this is a bit silly. Workaround: It is, in fact, possible to adapt the Levenshtein Distance algorithm to look not at each character in a string, but at each element of an array instead, and do comparisons based on that. I show how to make this change in my answer to your follow-up question.