Printing particularities of pgf77 (FORTRAN 77 behaviour?) - formatting

I compile and run this simple FORTRAN 77 program:
      program test
      write(6,*) '- - - - - - - - - - - - - - - - - - - - - - - - - - ',
     &           '- - - - - - - - - - - - - - - - - - - - - - - - - -'
      write(6,'(2G15.5)') 0.1,0.0
      end
With gfortran or f95 the output is:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
0.10000 0.0000
With pgf77 it is:
- - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - - - - - - - - - - - - - - - - - - -
0.10000 0.00000E+00
And with g77 or ifort:
- - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - - - - - - - - - - - - - - - - - - -
0.10000 0.0000
A couple of questions arise:
Why is 0.0 printed with four decimal places instead of five, as
requested in the format G15.5? Is this spec-compliant? And why
does pgf77 write it differently?
I guess the line break in the - - - - - - line with the last three
compilers is due to some limitation on the output line length... Is
there any way of increasing this limit, or otherwise forcing
single-line writes, at compile time?
By the way, the desired output is
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
0.10000 0.00000
which matches none of the above.

Exactly what the G edit descriptor causes to be printed is a little complicated, but for the value 0.0 the standard (10.7.5.2.2 in the 2008 edition) states that the compiler should print a representation with d-1 digits (i.e. 4 in your example) in the fractional part of the number. So most of your compilers are behaving correctly; I think that pgf77 is incorrect, but it may be working to an earlier standard with a different requirement.
The simplest fix for this is probably to use an F edit descriptor instead, e.g. (2F15.5).
As for the printing of the lines of hyphens: your use of *, which requests list-directed output, surrenders precise control of the output to the compiler. My opinion is that it is a little perverse of a compiler to print the two parts of the expression on two lines, but it is not non-standard behaviour.
If you want the hyphens printed all on one line, take control of the output format: write(6,'(2A24)') or something similar ought to do it (I didn't count the hyphens for you, just guessed that there are 24 in each part of the output). If that doesn't appeal to you, simply write one string with all the hyphens in it; that will probably get written on one line even using list-directed output.
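Putting both suggestions together, a minimal sketch (unverified, but standard Fortran: an A descriptor without a field width sizes itself to each string, so no hyphen-counting is needed, and F15.5 prints 0.0 as 0.00000, matching the desired output):
      program test2
c     Both strings go into one record via the self-sizing A format;
c     F15.5 gives five digits after the decimal point for 0.0.
      write(6,'(2A)')
     & '- - - - - - - - - - - - - - - - - - - - - - - - - - ',
     & '- - - - - - - - - - - - - - - - - - - - - - - - - -'
      write(6,'(2F15.5)') 0.1, 0.0
      end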

How do I add a line break to an external text log file from a pentaho transform?

I'm using Pentaho PDI (Spoon). I have a transform that compares 2 database tables (from a query selecting year and quarters within those tables); it feeds a Merge rows (diff) step and then a Filter rows step on whether flagfield is not identical, which logs the matches on success and logs the differences otherwise, both with Text file output steps...
My issue is that my external log file gets appended and looks like this:
412542 - 21 - 4 - deleted - DOMAIN1
461623 - 22 - 1 - deleted - DOMAIN1
^failuresDOMAIN1 - 238388 - 12 - 4 - identical
DOMAIN1- 223016 - 13 - 1 - identical
DOMAIN1- 171764 - 13 - 2 - identical
DOMAIN1- 185569 - 13 - 3 - identical
DOMAIN1- 232247 - 13 - 4 - identical
DOMAIN1- 260057 - 14 - 1 - identical
^successes
I want this output:
412542 - 21 - 4 - deleted - DOMAIN1
461623 - 22 - 1 - deleted - DOMAIN1
^failures
DOMAIN1 - 238388 - 12 - 4 - identical
DOMAIN1- 223016 - 13 - 1 - identical
DOMAIN1- 171764 - 13 - 2 - identical
DOMAIN1- 185569 - 13 - 3 - identical
DOMAIN1- 232247 - 13 - 4 - identical
DOMAIN1- 260057 - 14 - 1 - identical
^successes
Notice the line breaks between the successes and the failures.
I tried adding a Data grid step with a "line_break" string field that is simply a newline, then passing that to each Text file output step so the break gets logged as that column's value, which helps, but I can't seem to sequence the transform steps because they run in parallel...

Get fraction between x.66 and x.99

I'm trying to make a query (a CASE WHEN statement):
--all cols are numbers
case when round((O.POCET - O.P_DEL - O.P_DEL_DOD - O.P_FAK - O.P_VYD - O.P_OBJ)/SK.VELKE_BAL,3)
between 0.66 and 0.99 then 1
when round((O.POCET - O.P_DEL - O.P_DEL_DOD - O.P_FAK - O.P_VYD - O.P_OBJ)/SK.DOD_BAL,3)
between 0.66 and 0.99 then 2
else 0 end
It's working fine, but I also need to match numbers between 1.66-1.99, 2.66-2.99, and so on...
If I use LIKE I can get the fraction, but not test whether it lies between these numbers.
How can I solve this?
If you just want a match when the decimal part of the (positive) number lies in a given range, then use the MOD function:
CASE
WHEN MOD(your_sum1, 1) BETWEEN 0.66 AND 0.99
THEN 1
WHEN MOD(your_sum2, 1) BETWEEN 0.66 AND 0.99
THEN 2
ELSE 0
END
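Applied to the expression from the question, that pattern would look something like this (a sketch reusing the question's columns; MOD(x, 1) keeps only the fractional part of a positive x, so 1.70, 2.70, and so on all land in the 0.66-0.99 band):
-- MOD(..., 1) strips the integer part, so one BETWEEN test
-- covers 0.66-0.99, 1.66-1.99, 2.66-2.99, ...
case when MOD(round((O.POCET - O.P_DEL - O.P_DEL_DOD - O.P_FAK - O.P_VYD - O.P_OBJ)/SK.VELKE_BAL,3), 1)
between 0.66 and 0.99 then 1
when MOD(round((O.POCET - O.P_DEL - O.P_DEL_DOD - O.P_FAK - O.P_VYD - O.P_OBJ)/SK.DOD_BAL,3), 1)
between 0.66 and 0.99 then 2
else 0 end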

NaN loss values during training an auto-encoder with multiple outputs

I am trying to optimize an autoencoder so that it also reproduces the calculated committor values simultaneously. The code looks like:
encoder_input = keras.Input(shape=(ncv,))
# Encoder: linear stack down to the bottleneck.
xencode = keras.layers.Dense(hidden_layers_encoder[0], activation='linear')(encoder_input)
for i in hidden_layers_encoder[1:]:
    xencode = keras.layers.Dense(i, activation='linear')(xencode)
    xencode = keras.layers.Dropout(0.1)(xencode)
encoder_output = keras.layers.Dense(n_bottleneck, activation='linear')(xencode)
encoder = keras.Model(encoder_input, encoder_output, name="encoder")
# Decoder: reconstructs the input from the bottleneck.
decoder_input = keras.layers.Dense(hidden_layers_decoder[0], activation='tanh')(encoder_output)
xdecode = decoder_input
xdecode = keras.layers.Dropout(0.1)(xdecode)
for j in range(nhid - 1):
    xdecode = keras.layers.Dense(hidden_layers_decoder[j + 1], activation='tanh')(xdecode)
    xdecode = keras.layers.Dropout(0.1)(xdecode)
decoder_output = keras.layers.Dense(ncv, activation='linear', name='decoder')(xdecode)
opt = keras.optimizers.Adam(lr=0.001)
auto_encoder = keras.Model(encoder_input, decoder_output, name="auto-encoder")
# Committor branch; note that pb_output below is wired to
# encoder_output directly, so the pb_cal stack is left unused.
pb_input = keras.layers.Dense(hidden_layers_decoder[0], activation='sigmoid')(encoder_output)
pb_cal = keras.layers.Dropout(0.1)(pb_input)
for k in range(nhid - 1):
    pb_cal = keras.layers.Dense(hidden_layers_decoder[k + 1], activation='sigmoid')(pb_cal)
    pb_cal = keras.layers.Dropout(0.1)(pb_cal)
pb_output = keras.layers.Dense(npb, activation='sigmoid', name='pb_decoder')(encoder_output)
pbcoder = keras.Model(encoder_input, pb_output, name="pbcoder")
auto_encoder_pb = keras.Model(inputs=encoder_input, outputs=[decoder_output, pb_output], name="auto-encoder-pb")
auto_encoder_pb.compile(optimizer=opt, loss=['mse', 'mse'], metrics=['accuracy'])
history = auto_encoder_pb.fit(x_train, [x_train, y_train], validation_data=(x_test, [x_test, y_test]), batch_size=500, epochs=500)
The input dimension is 14 and I have used four hidden layers in all cases, with 56 neurons each. I have varied the dimension of the bottleneck from 1 to 8. I have thoroughly checked my data file to make sure that there are no NaN/Inf values. But while fitting, it gives me:
Epoch 1/500
143/143 [==============================] - 1s 10ms/step - loss: nan - decoder_loss: nan - pb_decoder_loss: nan - decoder_accuracy: 0.0310 - pb_decoder_accuracy: 0.5448 - val_loss: nan - val_decoder_loss: nan - val_pb_decoder_loss: nan - val_decoder_accuracy: 0.0311 - val_pb_decoder_accuracy: 0.5421
Epoch 2/500
143/143 [==============================] - 1s 7ms/step - loss: nan - decoder_loss: nan - pb_decoder_loss: nan - decoder_accuracy: 0.0307 - pb_decoder_accuracy: 0.5448 - val_loss: nan - val_decoder_loss: nan - val_pb_decoder_loss: nan - val_decoder_accuracy: 0.0311 - val_pb_decoder_accuracy: 0.5421
Epoch 3/500
143/143 [==============================] - 1s 8ms/step - loss: nan - decoder_loss: nan - pb_decoder_loss: nan - decoder_accuracy: 0.0307 - pb_decoder_accuracy: 0.5448 - val_loss: nan - val_decoder_loss: nan - val_pb_decoder_loss: nan - val_decoder_accuracy: 0.0311 - val_pb_decoder_accuracy: 0.5421
How can I fix this?
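To rule out bad inputs, I checked the arrays along these lines (a sketch, assuming x_train, y_train, x_test, and y_test are NumPy arrays):
import numpy as np

# np.isfinite is False for NaN and for +/-Inf, so a single pass
# covers both cases for every array that reaches fit().
for name, arr in [('x_train', x_train), ('y_train', y_train),
                  ('x_test', x_test), ('y_test', y_test)]:
    print(name, 'all finite:', np.isfinite(np.asarray(arr)).all())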

What is this batch in the verbose output during model training in tensorflow 2.3.0?

I have specified batch size = 16 and steps per epoch = 1000. The dataset contains 2628 samples in total. The verbose output is shown below as well as in the image link.
Epoch 26/100
1000/1000 [==============================] - ETA: 0s - batch: 499.5000 - size: 16.0000 - loss: 0.2308 - Z_loss: 0.0200 - Z_logits_loss: 0.2108
lr = 0.001961161382496357, step = 26000
1000/1000 [==============================] - 3168s 3s/step - batch: 499.5000 - size: 16.0000 - loss: 0.2308 - Z_loss: 0.0200 - Z_logits_loss: 0.2108
What does batch = 499.5000 mean?
[screenshot: model training verbose output]
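One observation worth noting when reading those numbers: with 1000 steps per epoch, the mean of the step indices 0 through 999 is exactly 499.5, the value shown, which suggests "batch" is a running average of the batch index over the epoch rather than a count (an inference from the arithmetic, not from the TensorFlow source). In plain Python:
# Steps in an epoch are indexed 0..999; their running mean is
# 499.5, matching the 'batch' field in the progress bar.
print(sum(range(1000)) / 1000)  # 499.5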

How to work with foldRight() in Kotlin?

I tried this code:
println(listOf(1, 2, 4).foldRight(0) { total, next ->
total - next
})
I thought it would work like 0 + 4 - 2 - 1 = 1.
But it returns 3. Why?
Sorry for this silly question.
foldRight works by accumulating the result from right to left. In your case, it is going to do
(1 - (2 - (4 - 0))) = (1 - (2 - 4)) = 1 - (-2) = 3
Note that your operation has its parameters in the wrong order: foldRight will pass you the next element as the first parameter and the accumulator as the second parameter. See https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.collections/fold-right.html. If you swap them, you would have
(((0 - 4) - 2) - 1) = -7
unless I am getting something wrong.
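To make the order concrete, here is a small runnable sketch that prints each step (the element/acc parameter names follow the stdlib's convention):
fun main() {
    // foldRight consumes the list right to left, passing
    // (element, acc) to the operation.
    val r = listOf(1, 2, 4).foldRight(0) { element, acc ->
        println("element=$element, acc=$acc")
        acc - element
    }
    println(r)  // -7, i.e. (((0 - 4) - 2) - 1)

    // fold consumes the list left to right, passing (acc, element).
    val l = listOf(1, 2, 4).fold(0) { acc, element -> acc - element }
    println(l)  // also -7 here: (((0 - 1) - 2) - 4)
}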