Mixed Integer Linear Optimization with Pyomo - Travelling Salesman Problem

I am trying to solve a travelling salesman problem with the Pyomo framework. However, I am stuck: the solver is reporting that my formulation is infeasible.
import numpy as np
import pyomo.environ as pyo
from pyomo.opt import SolverFactory

journey_distances = np.array([[0, 28, 34, 45, 36],
                              [28, 0, 45, 52, 64],
                              [34, 45, 0, 11, 34],
                              [45, 52, 11, 0, 34],
                              [36, 64, 34, 34, 0]])

# create variables - binary journey indicators
num_locations = journey_distances.shape[0]
model = pyo.ConcreteModel()
model.journeys = pyo.Var(range(num_locations), range(num_locations), domain=pyo.Binary)
journeys = model.journeys
# add A to B constraints
model.AtoB = pyo.ConstraintList()
model.BtoA = pyo.ConstraintList()
AtoB = model.AtoB
BtoA = model.BtoA
AtoB_sum = [sum([ journeys[i,j] for j in range(num_locations) if i!=j]) for i in range(num_locations)]
BtoA_sum = [sum([ journeys[i,j] for i in range(num_locations) if j!=i]) for j in range(num_locations)]
for journey_sum in range(num_locations):
    AtoB.add(AtoB_sum[journey_sum] == 1)
    if journey_sum < num_locations - 1:
        BtoA.add(BtoA_sum[journey_sum] == 1)
# add auxiliary variables to ensure that each successive journey ends and starts in the same town, e.g. A to B, then B to C.
# u_j - u_i >= -(n-1) + n*journeys_{ij} for i,j = 1...n, i != j (with n = num_locations)
model.successive_aux = pyo.Var(range(0,num_locations), domain = pyo.Integers, bounds = (0,num_locations-1))
model.successive_constr = pyo.ConstraintList()
successive_aux = model.successive_aux
successive_constr = model.successive_constr
successive_constr.add(successive_aux[0] == 1)
for i in range(num_locations):
    for j in range(num_locations):
        if i != j:
            successive_constr.add(successive_aux[j] - successive_aux[i] >= -(num_locations - 1) + num_locations*journeys[i, j])
obj_sum = sum(journey_distances[i, j]*journeys[i, j] for i in range(num_locations) for j in range(num_locations) if i != j)
model.obj = pyo.Objective(expr=obj_sum, sense=pyo.minimize)
opt = SolverFactory('cplex')
results = opt.solve(model)
journey_res = np.array([model.journeys[journey].value for journey in journeys])
print(journey_res)
# results output is:
print(results)
Problem:
- Lower bound: -inf
  Upper bound: inf
  Number of objectives: 1
  Number of constraints: 31
  Number of variables: 26
  Number of nonzeros: 98
  Sense: unknown
Solver:
- Status: ok
  User time: 0.02
  Termination condition: infeasible
  Termination message: MIP - Integer infeasible.
  Error rc: 0
  Time: 0.10198116302490234
# model.pprint()
7 Set Declarations
AtoB_index : Size=1, Index=None, Ordered=Insertion
Key : Dimen : Domain : Size : Members
None : 1 : Any : 5 : {1, 2, 3, 4, 5}
BtoA_index : Size=1, Index=None, Ordered=Insertion
Key : Dimen : Domain : Size : Members
None : 1 : Any : 4 : {1, 2, 3, 4}
journeys_index : Size=1, Index=None, Ordered=False
Key : Dimen : Domain : Size : Members
None : 2 : journeys_index_0*journeys_index_1 : 25 : {(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (1, 0), (1, 1), (1, 2), (1, 3), (1, 4), (2, 0), (2, 1), (2, 2), (2, 3), (2, 4), (3, 0), (3, 1), (3, 2), (3, 3), (3, 4), (4, 0), (4, 1), (4, 2), (4, 3), (4, 4)}
journeys_index_0 : Size=1, Index=None, Ordered=False
Key : Dimen : Domain : Size : Members
None : 1 : Any : 5 : {0, 1, 2, 3, 4}
journeys_index_1 : Size=1, Index=None, Ordered=False
Key : Dimen : Domain : Size : Members
None : 1 : Any : 5 : {0, 1, 2, 3, 4}
successive_aux_index : Size=1, Index=None, Ordered=False
Key : Dimen : Domain : Size : Members
None : 1 : Any : 5 : {0, 1, 2, 3, 4}
successive_constr_index : Size=1, Index=None, Ordered=Insertion
Key : Dimen : Domain : Size : Members
None : 1 : Any : 21 : {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21}
2 Var Declarations
journeys : Size=25, Index=journeys_index
Key : Lower : Value : Upper : Fixed : Stale : Domain
(0, 0) : 0 : None : 1 : False : True : Binary
(0, 1) : 0 : None : 1 : False : True : Binary
(0, 2) : 0 : None : 1 : False : True : Binary
(0, 3) : 0 : None : 1 : False : True : Binary
(0, 4) : 0 : None : 1 : False : True : Binary
(1, 0) : 0 : None : 1 : False : True : Binary
(1, 1) : 0 : None : 1 : False : True : Binary
(1, 2) : 0 : None : 1 : False : True : Binary
(1, 3) : 0 : None : 1 : False : True : Binary
(1, 4) : 0 : None : 1 : False : True : Binary
(2, 0) : 0 : None : 1 : False : True : Binary
(2, 1) : 0 : None : 1 : False : True : Binary
(2, 2) : 0 : None : 1 : False : True : Binary
(2, 3) : 0 : None : 1 : False : True : Binary
(2, 4) : 0 : None : 1 : False : True : Binary
(3, 0) : 0 : None : 1 : False : True : Binary
(3, 1) : 0 : None : 1 : False : True : Binary
(3, 2) : 0 : None : 1 : False : True : Binary
(3, 3) : 0 : None : 1 : False : True : Binary
(3, 4) : 0 : None : 1 : False : True : Binary
(4, 0) : 0 : None : 1 : False : True : Binary
(4, 1) : 0 : None : 1 : False : True : Binary
(4, 2) : 0 : None : 1 : False : True : Binary
(4, 3) : 0 : None : 1 : False : True : Binary
(4, 4) : 0 : None : 1 : False : True : Binary
successive_aux : Size=5, Index=successive_aux_index
Key : Lower : Value : Upper : Fixed : Stale : Domain
0 : 0 : None : 4 : False : True : Integers
1 : 0 : None : 4 : False : True : Integers
2 : 0 : None : 4 : False : True : Integers
3 : 0 : None : 4 : False : True : Integers
4 : 0 : None : 4 : False : True : Integers
1 Objective Declarations
obj : Size=1, Index=None, Active=True
Key : Active : Sense : Expression
None : True : minimize : 28*journeys[0,1] + 34*journeys[0,2] + 45*journeys[0,3] + 36*journeys[0,4] + 28*journeys[1,0] + 45*journeys[1,2] + 52*journeys[1,3] + 64*journeys[1,4] + 34*journeys[2,0] + 45*journeys[2,1] + 11*journeys[2,3] + 34*journeys[2,4] + 45*journeys[3,0] + 52*journeys[3,1] + 11*journeys[3,2] + 34*journeys[3,4] + 36*journeys[4,0] + 64*journeys[4,1] + 34*journeys[4,2] + 34*journeys[4,3]
3 Constraint Declarations
AtoB : Size=5, Index=AtoB_index, Active=True
Key : Lower : Body : Upper : Active
1 : 1.0 : journeys[0,1] + journeys[0,2] + journeys[0,3] + journeys[0,4] : 1.0 : True
2 : 1.0 : journeys[1,0] + journeys[1,2] + journeys[1,3] + journeys[1,4] : 1.0 : True
3 : 1.0 : journeys[2,0] + journeys[2,1] + journeys[2,3] + journeys[2,4] : 1.0 : True
4 : 1.0 : journeys[3,0] + journeys[3,1] + journeys[3,2] + journeys[3,4] : 1.0 : True
5 : 1.0 : journeys[4,0] + journeys[4,1] + journeys[4,2] + journeys[4,3] : 1.0 : True
BtoA : Size=4, Index=BtoA_index, Active=True
Key : Lower : Body : Upper : Active
1 : 1.0 : journeys[1,0] + journeys[2,0] + journeys[3,0] + journeys[4,0] : 1.0 : True
2 : 1.0 : journeys[0,1] + journeys[2,1] + journeys[3,1] + journeys[4,1] : 1.0 : True
3 : 1.0 : journeys[0,2] + journeys[1,2] + journeys[3,2] + journeys[4,2] : 1.0 : True
4 : 1.0 : journeys[0,3] + journeys[1,3] + journeys[2,3] + journeys[4,3] : 1.0 : True
successive_constr : Size=21, Index=successive_constr_index, Active=True
Key : Lower : Body : Upper : Active
1 : 1.0 : successive_aux[0] : 1.0 : True
2 : -Inf : -4 + 5*journeys[0,1] - (successive_aux[1] - successive_aux[0]) : 0.0 : True
3 : -Inf : -4 + 5*journeys[0,2] - (successive_aux[2] - successive_aux[0]) : 0.0 : True
4 : -Inf : -4 + 5*journeys[0,3] - (successive_aux[3] - successive_aux[0]) : 0.0 : True
5 : -Inf : -4 + 5*journeys[0,4] - (successive_aux[4] - successive_aux[0]) : 0.0 : True
6 : -Inf : -4 + 5*journeys[1,0] - (successive_aux[0] - successive_aux[1]) : 0.0 : True
7 : -Inf : -4 + 5*journeys[1,2] - (successive_aux[2] - successive_aux[1]) : 0.0 : True
8 : -Inf : -4 + 5*journeys[1,3] - (successive_aux[3] - successive_aux[1]) : 0.0 : True
9 : -Inf : -4 + 5*journeys[1,4] - (successive_aux[4] - successive_aux[1]) : 0.0 : True
10 : -Inf : -4 + 5*journeys[2,0] - (successive_aux[0] - successive_aux[2]) : 0.0 : True
11 : -Inf : -4 + 5*journeys[2,1] - (successive_aux[1] - successive_aux[2]) : 0.0 : True
12 : -Inf : -4 + 5*journeys[2,3] - (successive_aux[3] - successive_aux[2]) : 0.0 : True
13 : -Inf : -4 + 5*journeys[2,4] - (successive_aux[4] - successive_aux[2]) : 0.0 : True
14 : -Inf : -4 + 5*journeys[3,0] - (successive_aux[0] - successive_aux[3]) : 0.0 : True
15 : -Inf : -4 + 5*journeys[3,1] - (successive_aux[1] - successive_aux[3]) : 0.0 : True
16 : -Inf : -4 + 5*journeys[3,2] - (successive_aux[2] - successive_aux[3]) : 0.0 : True
17 : -Inf : -4 + 5*journeys[3,4] - (successive_aux[4] - successive_aux[3]) : 0.0 : True
18 : -Inf : -4 + 5*journeys[4,0] - (successive_aux[0] - successive_aux[4]) : 0.0 : True
19 : -Inf : -4 + 5*journeys[4,1] - (successive_aux[1] - successive_aux[4]) : 0.0 : True
20 : -Inf : -4 + 5*journeys[4,2] - (successive_aux[2] - successive_aux[4]) : 0.0 : True
21 : -Inf : -4 + 5*journeys[4,3] - (successive_aux[3] - successive_aux[4]) : 0.0 : True
13 Declarations: journeys_index_0 journeys_index_1 journeys_index journeys AtoB_index AtoB BtoA_index BtoA successive_aux_index successive_aux successive_constr_index successive_constr obj
If anyone can see what the problem is and can let me know, that would be a great help.

I'm not overly familiar with coding TSP problems, and I'm not sure of all the details in your code, but this (below) is a problem. It seems you are coding successive_aux (call it sa for short) as a sequencing of integers. In this snippet (I chopped it down to 3 points), if you think about the legal route 0-1-2-0, the constraints force sa_1 > sa_0 and sa_2 > sa_1, so it is infeasible to also require sa_0 > sa_2. Also, your bounds on sa appear infeasible: in this example, sa_0 is fixed to 1 and the upper bound on sa is 2, which leaves no room for three distinct sequence values. Those are two "infeasibilities" in your formulation. A sketch of the usual fix follows the snippet.
Key : Lower : Body : Upper : Active
1 : 1.0 : successive_aux[0] : 1.0 : True
2 : -Inf : -2 + 3*journeys[0,1] - (successive_aux[1] - successive_aux[0]) : 0.0 : True
3 : -Inf : -2 + 3*journeys[0,2] - (successive_aux[2] - successive_aux[0]) : 0.0 : True
4 : -Inf : -2 + 3*journeys[1,0] - (successive_aux[0] - successive_aux[1]) : 0.0 : True
5 : -Inf : -2 + 3*journeys[1,2] - (successive_aux[2] - successive_aux[1]) : 0.0 : True
6 : -Inf : -2 + 3*journeys[2,0] - (successive_aux[0] - successive_aux[2]) : 0.0 : True
7 : -Inf : -2 + 3*journeys[2,1] - (successive_aux[1] - successive_aux[2]) : 0.0 : True
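The usual remedy is the standard Miller-Tucker-Zemlin formulation: fix the sequence value of the depot (town 0) and only impose the ordering constraints between the non-depot towns, so the inequalities can never close into a cycle. A sketch against the names in the question, meant to replace the successive_constr block (I haven't run it against the full model):

# MTZ subtour elimination, reusing successive_aux / journeys / num_locations
model.successive_aux[0].fix(0)  # the depot is left out of the ordering
model.successive_constr = pyo.ConstraintList()
for i in range(1, num_locations):
    for j in range(1, num_locations):
        if i != j:
            # if journeys[i,j] == 1 this forces successive_aux[j] >= successive_aux[i] + 1;
            # if journeys[i,j] == 0 it is slack for any sequence values in [0, n-1]
            model.successive_constr.add(
                model.successive_aux[i] - model.successive_aux[j]
                + num_locations*model.journeys[i, j] <= num_locations - 1)

With 5 towns this leaves the sequence values 1..4 for the four non-depot towns, so a legal tour such as 0-1-2-3-4-0 stays feasible.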

I'm not an optimization expert, but it looks like you need to change the distances between the cities, since you're basically saying that the distance from city1 to city1 is 0, from city2 to city2 is 0, etc. If you change these distances to a very large number (say 1000000), the optimizer will never choose to go from city1 back to city1.
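An alternative sketch (again reusing the question's variable names, not tested against the full model): rather than inflating the diagonal of the distance matrix, the self-loop variables can simply be fixed to zero, which removes those arcs from the model entirely.

# forbid travelling from a town to itself, whatever the distance matrix says
for i in range(num_locations):
    model.journeys[i, i].fix(0)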
Hope this helps.

Related

Pandas: How to rename columns that don't have names but are indexed as 0, 1, 2, 3, etc.

I don't know how to rename columns that are unnamed.
I have tried both approaches, putting the indices in quotes and not, like this, and neither worked:
train_dataset_with_pred_new_df.rename(columns={
    0: 'timestamp', 1: 'open', 2: 'close', 3: 'high', 4: 'low', 5: 'volume',
    6: 'CCI7', 7: 'DI+', 8: 'DI-', 9: 'ADX', 10: 'MACD Main', 11: 'MACD Signal',
    12: 'MACD histogram', 13: 'Fisher Transform', 14: 'Fisher Trigger'
})
And
train_dataset_with_pred_new_df.rename(columns={
    '0': 'timestamp', '1': 'open', '2': 'close', '3': 'high', '4': 'low', '5': 'volume',
    '6': 'CCI7', '8': 'DI+', '9': 'DI-', '10': 'ADX', '11': 'MACD Main', '12': 'MACD Signal',
    '13': 'MACD histogram', '15': 'Fisher Transform', '16': 'Fisher Trigger'
})
Since neither worked, how do I rename them?
Thank you for your help in advance :)
pandas.DataFrame.rename returns a new DataFrame if the parameter inplace is False.
You need to reassign your dataframe:
train_dataset_with_pred_new_df = train_dataset_with_pred_new_df.rename(columns={
    0: 'timestamp', 1: 'open', 2: 'close', 3: 'high', 4: 'low', 5: 'volume',
    6: 'CCI7', 7: 'DI+', 8: 'DI-', 9: 'ADX', 10: 'MACD Main', 11: 'MACD Signal',
    12: 'MACD histogram', 13: 'Fisher Transform', 14: 'Fisher Trigger'})
Or simply use inplace=True:
train_dataset_with_pred_new_df.rename(columns={
    0: 'timestamp', 1: 'open', 2: 'close', 3: 'high', 4: 'low', 5: 'volume',
    6: 'CCI7', 7: 'DI+', 8: 'DI-', 9: 'ADX', 10: 'MACD Main', 11: 'MACD Signal',
    12: 'MACD histogram', 13: 'Fisher Transform', 14: 'Fisher Trigger'
}, inplace=True)
df.rename(columns={ df.columns[1]: "your value" }, inplace = True)
What you are trying to do is rename the index. Instead of renaming existing columns, you are renaming the index. So rename the index, not the columns:
train_dataset_with_pred_new_df.rename(
    index={0: 'timestamp', 1: 'open', 2: 'close', 3: 'high', 4: 'low', 5: 'volume',
           6: 'CCI7', 7: 'DI+', 8: 'DI-', 9: 'ADX', 10: 'MACD Main', 11: 'MACD Signal',
           12: 'MACD histogram', 13: 'Fisher Transform', 14: 'Fisher Trigger'},
    inplace=True)
As it looks like you want to reassign all names, simply do:
df.columns = ['timestamp', 'open', 'close', 'high', 'low', 'volume',
'CCI7', 'DI+', 'DI-', 'ADX', 'MACD Main', 'MACD Signal',
'MACD histogram', 'Fisher Transform', 'Fisher Trigger']
Or, in a chain:
df.set_axis(['timestamp', 'open', 'close', 'high', 'low', 'volume',
'CCI7', 'DI+', 'DI-', 'ADX', 'MACD Main', 'MACD Signal',
'MACD histogram', 'Fisher Transform', 'Fisher Trigger'],
axis=1)
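Whether the integer keys (0) or the string keys ('0') are the right ones depends on what the column labels actually are, so it is worth checking first; a small illustrative check:

print(train_dataset_with_pred_new_df.columns.tolist())  # e.g. [0, 1, 2, ...] (ints) vs ['0', '1', ...] (strings)
print(train_dataset_with_pred_new_df.columns.dtype)     # int64 for integer labels, object for strings

rename silently ignores mapping keys that don't match the existing labels, which is why using the wrong key type appears to do nothing.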

Remove nan from pandas binner

I have created the following pandas dataframe called train:
import pandas as pd
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf
import scipy.stats as stats
ds = {
    'matchKey': [621062, 622750, 623508, 626451, 626611, 626796, 627114, 630055, 630225],
    'og_max_last_dpd': [10, 10, -99999, 10, 10, 10, 10, 10, 10],
    'og_min_last_dpd': [10, 10, -99999, 10, 10, 10, 10, 10, 10],
    'og_max_max_dpd': [0, 0, -99999, 1, 0, 5, 0, 4, 0],
    'Target': [1, 0, 1, 0, 0, 1, 1, 1, 0]
}
train = pd.DataFrame(data=ds)
The dataframe looks like this:
print(train)
matchKey og_max_last_dpd og_min_last_dpd og_max_max_dpd Target
0 621062 10 10 0 1
1 622750 10 10 0 0
2 623508 -99999 -99999 -99999 1
3 626451 10 10 1 0
4 626611 10 10 0 0
5 626796 10 10 5 1
6 627114 10 10 0 1
7 630055 10 10 4 1
8 630225 10 10 0 0
I have then binned the column called og_max_max_dpd using this code:
def mono_bin(Y, X, char, n=20):
    X2 = X.fillna(-99999)
    r = 0
    while np.abs(r) < 1:
        d1 = pd.DataFrame({"X": X2, "Y": Y, "Bucket": pd.qcut(X2, n, duplicates="drop")})  # ,include_lowest=True
        d2 = d1.groupby("Bucket", as_index=True)
        r, p = stats.spearmanr(d2.mean().X, d2.mean().Y)
        n = n - 1
    d3 = pd.DataFrame(d2.min().X, columns=["min_" + X.name])
    d3["max_" + X.name] = d2.max().X
    d3[Y.name] = d2.sum().Y
    d3["total"] = d2.count().Y
    d3[Y.name + "_rate"] = d2.mean().Y
    d4 = (d3.sort_values(by="min_" + X.name)).reset_index(drop=True)
    # print("=" * 85)
    # print(d4)
    ninf = float("-inf")
    pinf = float("+inf")
    array = []
    for i in range(len(d4) - 1):
        array.append(d4["max_" + char].iloc[i])
    return [ninf] + array + [pinf]
binner = mono_bin(train['Target'], train['og_max_max_dpd'], 'og_max_max_dpd')
I have printed out the binner which looks like this:
print(binner)
[-inf, -99999.0, nan, 0.0, nan, nan, 1.0, nan, nan, 4.0, nan, inf]
I want to remove the nan from that list so that the binner looks like this:
[-inf, -99999.0, 0.0, 1.0, 4.0, inf]
Does anyone know how to remove the nan?
You can simply use dropna to remove it from d4:
...
    d3[Y.name + "_rate"] = d2.mean().Y
    d4 = (d3.sort_values(by="min_" + X.name)).reset_index(drop=True)
    d4.dropna(inplace=True)
    # print("=" * 85)
    # print(d4)
    ninf = float("-inf")
...
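Alternatively, if you'd rather leave mono_bin untouched, you can filter the NaNs out of the returned list itself. A small sketch (note that np.isnan is False for -inf and +inf, so the sentinel edges survive):

import numpy as np

# illustrative input shaped like the binner printed above
binner = [float("-inf"), -99999.0, float("nan"), 0.0, float("nan"), float("nan"),
          1.0, float("nan"), float("nan"), 4.0, float("nan"), float("inf")]
cleaned = [edge for edge in binner if not np.isnan(edge)]
print(cleaned)  # [-inf, -99999.0, 0.0, 1.0, 4.0, inf]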

Recode in Python using the dictionary approach?

I'm pretty new to Python and I'm trying to work out how to use a dictionary to recode a variable into a new one.
I'm looking to recode the existing values 1 and 2 into 1, and 3 and 4 into 2:
recode1 = {1 : 1, 2 : 1, 3 : 2, 4 : 2}
df['recode'].map(recode1)
It looks like all of the output values are coming back as NaN.
recode1 = {1: 1, 2: 1, 3: 2, 4: 2}
recode1[1] = recode1[1]
recode1[3] = recode1[2]
recode1[2] = recode1[4]
recode1
# {1: 1, 2: 2, 3: 1, 4: 2}
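For what it's worth, the usual reasons map produces all NaN are that the result was never assigned back, or that the column's dtype doesn't match the dictionary keys. A minimal sketch (the column name recode is taken from the question; the data is illustrative):

import pandas as pd

df = pd.DataFrame({"recode": [1, 2, 3, 4, 2]})
recode1 = {1: 1, 2: 1, 3: 2, 4: 2}

# map() returns a new Series, so assign the result to keep it
df["recoded"] = df["recode"].map(recode1)
print(df["recoded"].tolist())  # [1, 1, 2, 2, 1]

# if every value comes back NaN, the dtypes probably disagree,
# e.g. string '1' in the column vs integer 1 in the dict:
# df["recoded"] = df["recode"].astype(int).map(recode1)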

how to use fold and reduce in grouping in kotlin

I know how to use the reduce and fold operations, but I'm not getting how to use them with a grouping.
val numbers = listOf("one", "two", "three", "four", "five")
println(numbers.groupingBy { it.first() }.eachCount()) // Output:- {o=1, t=2, f=2}
groupingBy returns a Grouping, so I need to figure out how to use fold and reduce with a Kotlin Grouping.
Any example is OK; I just need to use reduce and fold with grouping in Kotlin.
I've honestly never used this in a project, but you can understand it from the code below.
it % 5 gives the remainder values 0, 1, 2, 3, 4,
i.e. 5 % 5 = 0; 6 % 5 = 1; 7 % 5 = 2; 8 % 5 = 3; 9 % 5 = 4; 10 % 5 = 0
val numb = listOf(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20)
val nmap = numb.groupingBy { it % 5 }
println(nmap.eachCount())
println("map = ${nmap.reduce { key, accumulator, element ->
println("$key ($accumulator,$element)")
accumulator + element
}}")
Output
{1=4, 2=4, 3=4, 4=4, 0=4}
1 (1,6)
2 (2,7)
3 (3,8)
4 (4,9)
0 (5,10) ---> 5 +
1 (7,11)
2 (9,12)
3 (11,13)
4 (13,14)
0 (15,15) ----> 5 + 15 +
1 (18,16)
2 (21,17)
3 (24,18)
4 (27,19)
0 (30,20) -----> 5 + 15 + 30
map = {1=34, 2=38, 3=42, 4=46, 0=50}

H264 encoding and decoding using Videotoolbox

I was testing encoding and decoding with VideoToolbox, converting captured frames to H264 and using that data to display them in an AVSampleBufferDisplayLayer.
While decompressing, CMVideoFormatDescriptionCreateFromH264ParameterSets fails with error code -12712.
I followed this code from mobisoftinfotech.com:
status = CMVideoFormatDescriptionCreateFromH264ParameterSets(
             kCFAllocatorDefault, 2,
             (const uint8_t *const *)parameterSetPointers,
             parameterSetSizes, 4, &_formatDesc);
The test project is videoCompressionTest; can anyone figure out the problem?
I am not sure if you have figured out the problem yet. However, I found two places in your code that lead to the error. After fixing them and running your test app locally, it seems to work fine. (Tested with Xcode 9.4.1, macOS 10.13.)
The first one is in the -(void)CompressAndConvertToData:(CMSampleBufferRef)sampleBuffer method, where the while loop should be like this:
while (bufferOffset < blockBufferLength - AVCCHeaderLength) {
    // Read the NAL unit length
    uint32_t NALUnitLength = 0;
    memcpy(&NALUnitLength, bufferDataPointer + bufferOffset, AVCCHeaderLength);
    // Convert the length value from Big-endian to Little-endian
    NALUnitLength = CFSwapInt32BigToHost(NALUnitLength);
    // Write start code to the elementary stream
    [elementaryStream appendBytes:startCode length:startCodeLength];
    // Write the NAL unit without the AVCC length header to the elementary stream
    [elementaryStream appendBytes:bufferDataPointer + bufferOffset + AVCCHeaderLength
                           length:NALUnitLength];
    // Move to the next NAL unit in the block buffer
    bufferOffset += AVCCHeaderLength + NALUnitLength;
}

uint8_t *bytes = (uint8_t *)[elementaryStream bytes];
int size = (int)[elementaryStream length];
[self receivedRawVideoFrame:bytes withSize:size];
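As an aside for readers less familiar with Objective-C: what this loop does is repackage AVCC length-prefixed NAL units into the Annex B start-code format that the elementary stream expects. A rough Python equivalent of just that transformation (names are illustrative, not from the project):

import struct

def avcc_to_annexb(buf: bytes, header_len: int = 4) -> bytes:
    """Repackage AVCC length-prefixed NAL units as an Annex B byte stream."""
    out = bytearray()
    offset = 0
    while offset < len(buf) - header_len:
        # read the big-endian NAL unit length (the CFSwapInt32BigToHost step)
        (nal_len,) = struct.unpack_from(">I", buf, offset)
        out += b"\x00\x00\x00\x01"  # Annex B start code
        out += buf[offset + header_len:offset + header_len + nal_len]
        offset += header_len + nal_len  # move to the next NAL unit
    return bytes(out)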
The second place is in the decompression code where you process NALU type 8: the block of code in the if (nalu_type == 8) statement. This is a tricky one.
To fix it, update
for (int i = _spsSize + 12; i < _spsSize + 50; i++)
to
for (int i = _spsSize + 12; i < _spsSize + 12 + 50; i++)
And then you are free to remove this hack:
// was crashing here
if (_ppsSize == 0)
    _ppsSize = 4;
Why? Let's print out the frame packet format.
po frame
▿ 4282 elements
- 0 : 0
- 1 : 0
- 2 : 0
- 3 : 1
- 4 : 39
- 5 : 100
- 6 : 0
- 7 : 30
- 8 : 172
- 9 : 86
- 10 : 193
- 11 : 112
- 12 : 247
- 13 : 151
- 14 : 64
- 15 : 0
- 16 : 0
- 17 : 0
- 18 : 1
- 19 : 40
- 20 : 238
- 21 : 60
- 22 : 176
- 23 : 0
- 24 : 0
- 25 : 0
- 26 : 1
- 27 : 6
- 28 : 5
- 29 : 35
- 30 : 71
- 31 : 86
- 32 : 74
- 33 : 220
- 34 : 92
- 35 : 76
- 36 : 67
- 37 : 63
- 38 : 148
- 39 : 239
- 40 : 197
- 41 : 17
- 42 : 60
- 43 : 209
- 44 : 67
- 45 : 168
- 46 : 0
- 47 : 0
- 48 : 3
- 49 : 0
- 50 : 0
- 51 : 3
- 52 : 0
- 53 : 2
- 54 : 143
- 55 : 92
- 56 : 40
- 57 : 1
- 58 : 221
- 59 : 204
- 60 : 204
- 61 : 221
- 62 : 2
- 63 : 0
- 64 : 76
- 65 : 75
- 66 : 64
- 67 : 128
- 68 : 0
- 69 : 0
- 70 : 0
- 71 : 1
- 72 : 37
- 73 : 184
- 74 : 32
- 75 : 1
- 76 : 223
- 77 : 205
- 78 : 248
- 79 : 30
- 80 : 231
… more
The start code that ends the first NALU (the nalu_type == 7 case) is the 0, 0, 0, 1 at indices 15 to 18. The next 0, 0, 0, 1 (indices 23 to 26) introduces a type 6 NALU, and the type 8 NALU start code runs from index 68 to 71. That is why I modified the for loop a bit: to scan from the start index (_spsSize + 12) over a range of 50.
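A side note on reading such dumps: the NALU type is the low five bits of the byte immediately after a 00 00 00 01 start code. A tiny illustrative check in Python, using the first bytes printed above:

frame = bytes([0, 0, 0, 1, 39, 100, 0, 30])  # start code, then NAL header byte 39
nalu_type = frame[4] & 0x1F  # low 5 bits of the NAL unit header
print(nalu_type)  # 7 -> an SPS NAL unit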
I haven't fully tested your code to make sure encoding and decoding work as expected, but I hope this finding helps you.
By the way, if there is any misunderstanding, I would love to learn from your comments.