Binarization in Spyder (Python 3.9) using scikit-learn (preprocessing) and NumPy

I attach a screenshot of the code.
I am new to Python and am currently studying artificial intelligence, working in Spyder (Python 3.9).
After executing the code, I expected this output:
Binarized data:
[[1. 0. 1.]
 [0. 1. 0.]
 [1. 0. 0.]
 [1. 0. 0.]]

In Python, a statement normally has to be written on a single line:
data_binarized = preprocessing.Binarizer(your_code)
If you want to split it over two lines, you can use implicit line continuation (possible only inside parentheses, brackets and braces):
data_binarized = preprocessing.Binarizer(
    your_code)
As a second option, you can use a backslash (explicit line continuation):
data_binarized = \
    preprocessing.Binarizer(your_code)
For more information, see this answer:
https://stackoverflow.com/a/4172465/21187993
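Putting it together, a complete version might look like this (the input array below is only an example, chosen so that a threshold of 2.1 reproduces the expected output above; use the array from your own code):
import numpy as np
from sklearn import preprocessing

# Example input; replace with the array from your own code.
input_data = np.array([[5.1, -2.9, 3.3],
                       [-1.2, 7.8, -6.1],
                       [3.9, 0.4, 2.1],
                       [7.3, -9.9, -4.5]])

# Values strictly above the threshold become 1, everything else becomes 0.
data_binarized = preprocessing.Binarizer(threshold=2.1).transform(input_data)
print("Binarized data:\n", data_binarized)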


This code is taken directly from a textbook for Cisco DevNet, yet it produces a syntax error in the Python 3.8.1 shell. Why? I tried viewing the text in Notepad++.

while True:
    string = input('Enter some text to print. \nType "done" to quit>')
    if string == 'done':
        break
    print(string)
print('Done!')
SyntaxError: invalid syntax
[image of issue]
[image after idz's suggestion]
I think your problem is that you wrote:
while true:
instead of
while True:
However, if you are using Python 2 you should be aware that input will attempt to evaluate what you type in as Python. Depending on what your aim is, you may want to use raw_input instead. This is not an issue if you are using Python 3.
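For reference, on Python 2 the same loop would use raw_input; a sketch (only needed on Python 2, on Python 3 the code above is fine once True is capitalized):
while True:
    string = raw_input('Enter some text to print. \nType "done" to quit>')
    if string == 'done':
        break
    print(string)
print('Done!')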

Bazel Checkers Support

What options does Bazel provide for creating new targets, or extending existing ones, that call C/C++ code checkers such as
qac
cppcheck
iwyu
?
Do I need to use a genrule or is there some other target rule for that?
Is https://bazel.build/versions/master/docs/be/extra-actions.html my only viable choice here?
In security-critical software industries, such as aviation and automotive, it is very common to use the results of these calls to collect so-called "metric reports".
In these cases, calls to such linters must produce outputs that are further processed by the build actions of these metric-report collectors, and I cannot find a useful way of reusing Bazel's "extra actions" for that. Any ideas?
I've written something which uses extra actions to generate a compile_commands.json file used by clang-tidy and other tools, and I'd like to do the same kind of thing for iwyu when I get around to it. I haven't used those other tools, but I assume they fit the same pattern too.
The basic idea is to run an extra action which generates some output for each file (i.e. for each C/C++ compilation command), and then find all the output files afterwards (outside of Bazel) and aggregate them. A reasonably complete example is here for reference. Basically, the action listener (written in Python) decodes the extra action proto and extracts the source files, compiler options, etc.:
from sys import argv

# extra_actions_base_pb2 is generated from Bazel's extra_actions_base.proto
import extra_actions_base_pb2

action = extra_actions_base_pb2.ExtraActionInfo()
with open(argv[1], 'rb') as f:
    action.MergeFromString(f.read())
cpp_compile_info = action.Extensions[extra_actions_base_pb2.CppCompileInfo.cpp_compile_info]
compiler = cpp_compile_info.tool
options = ' '.join(cpp_compile_info.compiler_option)
source = cpp_compile_info.source_file
output = cpp_compile_info.output_file
print('%s %s -c %s -o %s' % (compiler, options, source, output))
If you give the extra action an output template, then it can write that output to a file. If you give the output files distinctive names, you can find them all in the output tree and merge them together however you want.
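For concreteness, the wiring in a BUILD file looks roughly like this; the target names are mine (not from the example linked above), and dump_compile_command_tool stands for a py_binary wrapping the script above, extended to write a JSON fragment to the path given by $(output ...) instead of printing to stdout:
extra_action(
    name = "dump_compile_command",
    out_templates = ["$(ACTION_ID).compile_command.json"],
    cmd = "$(location :dump_compile_command_tool) $(EXTRA_ACTION_FILE) $(output $(ACTION_ID).compile_command.json)",
    tools = [":dump_compile_command_tool"],
)

action_listener(
    name = "compile_command_listener",
    mnemonics = ["CppCompile"],
    extra_actions = [":dump_compile_command"],
)
The listener is then activated with bazel build --experimental_action_listener=//tools:compile_command_listener //your:target (the package path again being illustrative), after which the per-file outputs appear in the output tree.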
A more sophisticated option is to use bazel query --output=proto and write code to calculate the extra action output filenames of the targets you're interested in from there. That requires writing more code, but you don't have problems with old output files in the output tree that are accidentally included when aggregating.
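Either way, the aggregation step afterwards stays small; here is a Python sketch, assuming each extra action wrote one JSON fragment ending in .compile_command.json (the suffix and the fragment format are whatever you chose for the out_template, not anything Bazel dictates):
import glob
import json

# Gather the per-compilation fragments written by the extra action and merge
# them into a single compile_commands.json at the workspace root.
entries = []
for path in glob.glob('bazel-out/**/*.compile_command.json', recursive=True):
    with open(path) as f:
        entries.append(json.load(f))

with open('compile_commands.json', 'w') as f:
    json.dump(entries, f, indent=2)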
FWIW, Aspects are another possibility. However, I think extra actions work acceptably for this.

Conflict between Fortran + iso_c_binding (via ctypes or Cython) and matplotlib when reading a namelist [only with Anaconda Python]

[EDIT: the problem only occurs with Anaconda Python, not with the standard /usr/bin/python2.7]
[FYI: the gist referred to in this post can still be useful for anyone trying to use fortran with ctypes or cython, credit to http://www.fortran90.org/src/best-practices.html]
When using Fortran code from within Python (via iso_c_binding), either through ctypes or through Cython, I ran into a weird incompatibility with matplotlib. Basically, if matplotlib is "activated" (via %pylab or by calling pyplot.plot), reading the namelist loses the decimal digits: the value 9.81 is read as 9.0. Without matplotlib, there is no problem.
I made a minimal working example gist.github.com.
Basically, the Fortran module just allows reading a double-precision parameter g from a namelist and stores it as a global module variable. It can also print its value and allows setting it directly from the outside. That makes three functions:
read_par
print_par
set_par
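On the Python side, declaring the ctypes signatures explicitly can make this interface clearer; a small sketch (the by-value c_double argument matches the set_par call in test_ctypes.py below, everything else about the gist's interface is my assumption):
from ctypes import CDLL, c_double

lib = CDLL('./lib.so')
lib.read_par.restype = None        # reads g from the namelist file
lib.print_par.restype = None       # prints the current value of g
lib.set_par.restype = None
lib.set_par.argtypes = [c_double]  # the new value is passed by value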
You can download the gist example and then run:
make ctypes
python test_ctypes.py
test_ctypes.py contains:
from ctypes import CDLL, c_double
import matplotlib.pyplot as plt
f = CDLL('./lib.so')
print "Read param and print to screen"
f.read_par()
f.print_par()
# calling matplotlib's plot command seem to prevent
# subsequent namelist reading
print "Call matplotlib.pyplot's plot"
plt.plot([1,2],[3,4])
print "Print param just to be sure: everything is fine"
f.print_par()
print "But any new read will lose decimals on the way!"
f.read_par()
f.print_par()
print "Directly set parameter works fine"
f.set_par(c_double(9.81))
f.print_par()
print "But reading from namelist really does not work anymore"
f.read_par()
f.print_par()
With the output:
Read param and print to screen
g 9.8100000000000005
Call matplotlib.pyplot's plot
Print param just to be sure: everything is fine
g 9.8100000000000005
But any new read will lose decimals on the way!
g 9.0000000000000000
Directly set parameter works fine
g 9.8100000000000005
But reading from namelist really does not work anymore
g 9.0000000000000000
The same happens with the Cython example (make clean; make cython; python test_cython.py).
Does anyone know what is going on, or whether there is any workaround? The main reason I wrote a wrapper for my Fortran code is to be able to play around with a model: set parameters (via the namelist), run it, plot the result, set other parameters, and so on. So for my use case this bug rather defeats the purpose of the interactivity...
Many thanks for any hint.
PS: I am happy to file a bug somewhere, but would not know where (gfortran? matplotlib?)

Display variables using CBC MPS input in NEOS

I am trying to use NEOS to solve a linear program using MPS input.
The MPS file is fine, but apparently you also need a "parameters file" to tell the solver what to do (min/max etc.). However, I can't find any information on this anywhere online.
So far I have got NEOS to solve a maximization problem and display the objective function. However, I cannot get it to display the variables.
Does anyone know what I should add to the parameters file to tell NEOS/CBC to display the resulting variables?
The parameter file consists of a list of Cbc (standalone) commands in a file (one per line). The format of the commands is (quoting the documentation):
One command per line (and no -)
abcd? gives list of possibilities, if only one + explanation
abcd?? adds explanation, if only one fuller help(LATER)
abcd without value (where expected) gives current value
abcd value or abcd = value sets value
The commands are the following:
? dualT(olerance) primalT(olerance) inf(easibilityWeight)
integerT(olerance) inc(rement) allow(ableGap) ratio(Gap)
fix(OnDj) tighten(Factor) log(Level) slog(Level)
maxN(odes) strong(Branching) direction error(sAllowed)
gomory(Cuts) probing(Cuts) knapsack(Cuts) oddhole(Cuts)
clique(Cuts) round(ingHeuristic) cost(Strategy) keepN(ames)
scaling directory solver import
export save(Model) restore(Model) presolve
initialS(olve) branch(AndBound) sol(ution) max(imize)
min(imize) time(Limit) exit stop
quit - stdin unitTest
miplib ver(sion)
To see the solution values, you should include the line sol - after the min or max line of your parameter file.
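For example, a parameter file along these lines (a sketch built from the command names quoted above, not copied from the NEOS documentation) maximizes, solves the LP and prints the solution to standard output:
maximize
initialSolve
sol -
For a problem with integer variables, branchAndBound would replace initialSolve, and the abbreviated forms from the list (max, sol) work just as well.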
If this doesn't work you can submit the problem to NEOS in AMPL format via this page. In addition to model and data files, it accepts a commands file where you can use statements to solve the problem and display the solution, for example:
solve;
display _varname, _var;
This post describes how to convert MPS to AMPL.

program to reproduce itself and be useful -- not a quine

I have a program which performs a useful task. Now I want to produce the plain-text source code when the compiled executable runs, in addition to performing the original task. This is not a quine, but is probably related.
This capability would be useful in general, but my specific program is written in Fortran 90 and uses Mako Templates. When compiled it has access to the original source code files, but I want to be able to ensure that the source exists when a user runs the executable.
Is this possible to accomplish?
Here is an example of a simple Fortran 90 program which does a simple task.
program exampl
    implicit none
    write(*,*) 'this is my useful output'
end program exampl
Can this program be modified such that it performs the same task (outputs a string when run) and also outputs a Fortran 90 text file containing its own source?
Thanks in advance
It's been so long since I have touched Fortran (and I've never dealt with Fortran 90) that I'm not certain but I see a basic approach that should work so long as the language supports string literals in the code.
Include your entire program inside itself in a block of literals. Obviously you can't include the literals within this block itself; instead, you need some sort of token that tells your program to include the block of literals.
Obviously this means you have two copies of the source, one inside the other. As this is ugly I wouldn't do it that way, but rather store your source with the include_me token in it and run it through a program that builds the nested files before you compile it. Note that this program will share a decent amount of code with the routine that recreates the code from the block of literals. If you're going to go this route I would also make the program spit out the source for this program so whoever is trying to modify the files doesn't need to deal with the two copies.
My original program (see the question) is edited to add an include statement.
Call this file "exampl.f90":
program exampl
    implicit none
    write(*,*) "this is my useful output"
    open(unit=2, file="exampl_out.f90")
    include "exampl_source.f90"
    close(2)
end program exampl
Then another program (written in Python in this case) reads that source and generates exampl_source.f90 from it:
import os

f = open('exampl.f90')              # read in exampl.f90
g = open('exampl_source.f90', 'w')  # and replace each line with write(2,*) 'line'
for line in f:
    #print 'write(2,*) \''+line.rstrip()+'\'\n',
    g.write('write(2,*) \'' + line.rstrip() + '\'\n')
f.close()
g.close()  # make sure exampl_source.f90 is flushed before compiling
# then compile exampl.f90 (which includes exampl_source.f90)
os.system('gfortran exampl.f90')
os.system('/bin/rm exampl_source.f90')
Running this Python script produces an executable. When the executable is run, it performs the original task AND writes its own source code to exampl_out.f90.