I want to select the parameters C and gamma for C-SVM classification with the RBF (radial basis function) kernel using libsvm\tools\grid.py, but I don't know how. I installed libsvm, gnuplot, and Python and ran grid.py, but it raised an error and didn't show any results.
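For reference, grid.py is run from the command line, not imported; it shells out to the svm-train executable and to gnuplot, so the most common failure is that it cannot find those programs. A typical invocation looks like this (the executable paths and the heart_scale dataset are placeholders for your own setup):
python grid.py -log2c -5,15,2 -log2g 3,-15,-2 -v 5 -svmtrain "C:\libsvm\windows\svm-train.exe" -gnuplot "C:\gnuplot\bin\gnuplot.exe" heart_scale
The last line it prints is the best C, gamma, and cross-validation rate, which you can then pass to svm-train to fit the final model.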
%# grid of parameters
folds = 5;
[C,gamma] = meshgrid(-5:2:15, -15:2:3);

%# grid search, and cross-validation
cv_acc = zeros(numel(C),1);
d = 2;
for i=1:numel(C)
    cv_acc(i) = svmtrain(TrainLabel, TrainVec, ...
        sprintf('-c %f -g %f -v %d -t %d', 2^C(i), 2^gamma(i), folds, d));
end

%# pair (C,gamma) with best accuracy
[~,idx] = max(cv_acc);

%# contour plot of parameter selection
contour(C, gamma, reshape(cv_acc,size(C))), colorbar
hold on;
text(C(idx), gamma(idx), sprintf('Acc = %.2f %%',cv_acc(idx)), ...
    'HorizontalAlign','left', 'VerticalAlign','top')
hold off
xlabel('log_2(C)'), ylabel('log_2(\gamma)'), title('Cross-Validation Accuracy')

%# now you can train your model using best_C and best_gamma
best_C = 2^C(idx); best_gamma = 2^gamma(idx); %# ...
This performs the same grid search, but in MATLAB rather than through grid.py, so it sidesteps the grid.py/gnuplot setup entirely. Maybe this helps.
Alternatively, you could use the MATLAB snippet from the libsvm FAQ, "Q: How could I use MATLAB interface for parameter selection?":
http://www.csie.ntu.edu.tw/~cjlin/libsvm/faq.html#f803
bestcv = 0;
for log2c = -1:3,
    for log2g = -4:1,
        cmd = ['-v 5 -c ', num2str(2^log2c), ' -g ', num2str(2^log2g)];
        cv = svmtrain(heart_scale_label, heart_scale_inst, cmd);
        if (cv >= bestcv),
            bestcv = cv; bestc = 2^log2c; bestg = 2^log2g;
        end
        fprintf('%g %g %g (best c=%g, g=%g, rate=%g)\n', log2c, log2g, cv, bestc, bestg, bestcv);
    end
end
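If you would rather stay in Python without grid.py, the same loop can be written against libsvm's bundled Python bindings (svmutil). A minimal sketch, assuming libsvm's python/ directory is on PYTHONPATH and heart_scale sits in the working directory:
from svmutil import svm_read_problem, svm_train

y, x = svm_read_problem('heart_scale')
best_acc, best_c, best_g = -1, None, None
for log2c in range(-1, 4):
    for log2g in range(-4, 2):
        # with '-v 5', svm_train returns the 5-fold cross-validation accuracy
        acc = svm_train(y, x, '-v 5 -c {} -g {}'.format(2**log2c, 2**log2g))
        if acc > best_acc:
            best_acc, best_c, best_g = acc, 2**log2c, 2**log2g
print(best_c, best_g, best_acc)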
I am struggling with optimization in Julia. I used to use MATLAB, but I am trying to work in Julia instead. The following is the code I wrote.
using Optim
V = fill(1.0, (18,14,5))
agrid = range(-2, stop=20, length=18)
dgrid = range(0.01, stop=24, length=14)
#zgrid = [0.5; 0.75; 1.0; 1.25; 1.5]
zgrid = [0.7739832502827438; 0.8797631785217791; 1.0; 1.1366695315439874; 1.2920176239404275]
# function
function adj_utility(V, s_a, s_d, s_z, i_z, c_a, c_d)
    consumption = s_z + 1.0125*s_a + (1-0.018)*s_d - c_a - c_d - 0.05*(1-0.018)*s_d
    if consumption >= 0
        return (1/(1-2)) * (((consumption^0.88) * (c_d^(1-0.88)))^(1-2))
    end
    if consumption < 0
        return -99999999
    end
end
# Optimization
i_a = 1
i_d = 3
i_z = 1
utility_adj(x) = -adj_utility(V,agrid[i_a],dgrid[i_d],zgrid[i_z],i_z,x[1],x[2])
result1 = optimize(utility_adj, [1.0, 1.0], NelderMead())
If I use zgrid = [0.5; 0.75; 1.0; 1.25; 1.5], the code works.
However, if I use zgrid = [0.7739832502827438; 0.8797631785217791; 1.0; 1.1366695315439874; 1.2920176239404275], I get the error message "DomainError with -0.3781249999999996".
In the function, if consumption is less than 0 the return value should be -99999999, so I am not sure why I am getting this message.
Any help would be appreciated. Thank you.
Raising a negative number to a non-integer power has a complex result, which Julia refuses to return from real (Float64) inputs; that is where your error comes from:
julia> (-0.37)^(1-0.88)
ERROR: DomainError with -0.37:
Exponentiation yielding a complex result requires a complex argument.
Replace x^y with (x+0im)^y, Complex(x)^y, or similar.
Stacktrace:
[1] throw_exp_domainerror(::Float64) at ./math.jl:37
[2] ^(::Float64, ::Float64) at ./math.jl:888
[3] top-level scope at REPL[5]:1
You have a guard ensuring that consumption is non-negative, but consumption^0.88 is not the only fractional power in your objective: c_d^(1-0.88) fails the same way whenever Nelder-Mead proposes a negative c_d, so you need a constraint that c_d is positive as well. You can either add it directly to your objective function, as in the sketch below, or use one of the constrained optimization algorithms in NLopt, which is available in Julia via the NLopt package.
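A minimal sketch of the penalty idea, written in Python/SciPy purely for illustration (the toy budget equation and starting point are made up, not the asker's model):
import numpy as np
from scipy.optimize import minimize

def neg_utility(x):
    c_a, c_d = x
    consumption = 1.0 - c_a - c_d            # toy stand-in for the budget equation
    if consumption <= 0 or c_d <= 0:         # guard EVERY base of a fractional power
        return 1e8                           # large finite penalty, no domain error
    u = (1/(1-2)) * ((consumption**0.88) * (c_d**(1-0.88)))**(1-2)
    return -u

res = minimize(neg_utility, x0=[0.3, 0.3], method='Nelder-Mead')
print(res.x)
The same guard in the Julia function (checking c_d > 0 alongside consumption >= 0) removes the DomainError.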
I ran my code for an emotion detection model using a TensorFlow Federated simulation. My code works perfectly fine using CPUs only. However, I received this error when trying to run TFF on GPUs.
ValueError: Detected dataset reduce op in multi-GPU TFF simulation: `use_experimental_simulation_loop=True` for `tff.learning`; or use `for ... in iter(dataset)` for your own dataset iteration.Reduce op will be functional after b/159180073.
What is this error about, and how can I fix it? I have searched in many places but found no answer.
Here is the call stack in case it helps. It is very long, so I pasted it into this link: https://pastebin.com/b1R93gf1
EDIT: Here is the code containing the iterative_process:
def startTraining(output_file):
    iterative_process = tff.learning.build_federated_averaging_process(
        model_fn,
        client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.01),
        server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0),
        use_experimental_simulation_loop=True
    )
    flstate = iterative_process.initialize()
    evaluation = tff.learning.build_federated_evaluation(model_fn)
    output_file.write(
        'round,available_users,loss,sparse_categorical_accuracy,val_loss,val_sparse_categorical_accuracy,test_loss,test_sparse_categorical_accuracy\n')
    curr_round_result = [0, 0, 100, 0, 100, 0]
    min_val_loss = 100
    for round in range(1, ROUND_COUNT + 1):
        available_users = fetch_available_users_and_increase_time(ROUND_DURATION_AVERAGE + random.randint(-ROUND_DURATION_VARIATION, ROUND_DURATION_VARIATION + 1))
        if len(available_users) == 0:
            write_to_file(curr_round_result)
            continue
        train_data = make_federated_data(available_users, 'train')
        flstate, metrics = iterative_process.next(flstate, train_data)
        val_data = make_federated_data(available_users, 'val')
        val_metrics = evaluation(flstate.model, val_data)
        curr_round_result[0] = round
        curr_round_result[1] = len(available_users)
        curr_round_result[2] = metrics['train']['loss']
        curr_round_result[3] = metrics['train']['sparse_categorical_accuracy']
        curr_round_result[4] = val_metrics['loss']
        curr_round_result[5] = val_metrics['sparse_categorical_accuracy']
        write_to_file(curr_round_result)
And here is the code for make_federated_data:
def make_federated_data(users, dataset_type):
    offset = 0
    if dataset_type == 'val':
        offset = train_size
    elif dataset_type == 'test':
        offset = train_size + val_size
    global LOADED_USER
    for id in users:
        if id + offset not in LOADED_USER:
            LOADED_USER[id + offset] = getDatasetFromFilePath(filepaths[id + offset])
    return [LOADED_USER[id + offset] for id in users]
TFF does support multi-GPU simulation, and as the error message says, one of two things is happening:
The code uses tff.learning but leaves the use_experimental_simulation_loop argument at its default value of False. With multiple GPUs, it must be set to True when using APIs such as tff.learning.build_federated_averaging_process, for example:
training_process = tff.learning.build_federated_averaging_process(
    ..., use_experimental_simulation_loop=True)
The code contains a custom tf.data.Dataset.reduce(...) call somewhere. This must be replaced with Python code that iterates over the dataset. For example,
result = dataset.reduce(initial_state=0, reduce_func=lambda s, x: s + x)
becomes
s = 0
for x in iter(dataset):
    s += x
I realized that TFF does not yet fully support multi-GPU simulation. Therefore, the workaround is to limit the number of visible GPUs to just one:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
I was trying to read a tflite model and pull all the parameters of the layers out.
My steps:
I generated the FlatBuffers model representation by running (build flatc first):
flatc --python tensorflow/tensorflow/lite/schema/schema.fbs
The result is a tflite/ folder that contains the layer description files (*.py) and some utility files.
I successfully loaded the model (in case of an ImportError, set PYTHONPATH to point to the folder that contains tflite/):
from tflite.Model import Model
def read_tflite_model(file):
    buf = open(file, "rb").read()
    buf = bytearray(buf)
    model = Model.GetRootAsModel(buf, 0)
    return model
I partly pulled the model and node parameters out, but got stuck while iterating over the nodes:
Model part:
def print_model_info(model):
    version = model.Version()
    print("Model version:", version)
    description = model.Description().decode('utf-8')
    print("Description:", description)
    subgraph_len = model.SubgraphsLength()
    print("Subgraph length:", subgraph_len)
Nodes part:
def print_nodes_info(model):
    # what does this 0 mean? should it always be zero?
    subgraph = model.Subgraphs(0)
    operators_len = subgraph.OperatorsLength()
    print('Operators length:', operators_len)

    from collections import deque
    nodes = deque(subgraph.InputsAsNumpy())

    STEP_N = 0
    MAX_STEPS = operators_len
    print("Nodes info:")
    while len(nodes) != 0 and STEP_N <= MAX_STEPS:
        print("MAX_STEPS={} STEP_N={}".format(MAX_STEPS, STEP_N))
        print("-" * 60)
        node_id = nodes.pop()
        print("Node id:", node_id)
        tensor = subgraph.Tensors(node_id)
        print("Node name:", tensor.Name().decode('utf-8'))
        print("Node shape:", tensor.ShapeAsNumpy())
        # which type is it? what does it mean?
        type_of_tensor = tensor.Type()
        print("Tensor type:", type_of_tensor)
        quantization = tensor.Quantization()
        min = quantization.MinAsNumpy()
        max = quantization.MaxAsNumpy()
        scale = quantization.ScaleAsNumpy()
        zero_point = quantization.ZeroPointAsNumpy()
        print("Quantization: ({}, {}), s={}, z={}".format(min, max, scale, zero_point))
        # I do not understand it again. what is j, that I set to 0 here?
        operator = subgraph.Operators(0)
        for i in operator.OutputsAsNumpy():
            nodes.appendleft(i)
        STEP_N += 1
    print("-" * 60)
Please point me to documentation or some example of using this API.
My problems are:
I cannot find documentation for this API.
Iterating over Tensor objects does not seem possible, as a Tensor has no Inputs or Outputs methods. Also, in subgraph.Operators(j=0) I do not understand what j means; because of that, my loop only ever visits two nodes: the input (once) and the next one over and over again.
Iterating over Operator objects certainly is possible. Here I iterate over them all, but I cannot figure out how to map an Operator to its Tensors:
def print_in_out_info_of_all_operators(model):
    # what does this 0 mean? should it always be zero?
    subgraph = model.Subgraphs(0)
    for i in range(subgraph.OperatorsLength()):
        operator = subgraph.Operators(i)
        print('Outputs', operator.OutputsAsNumpy())
        print('Inputs', operator.InputsAsNumpy())
I do not understand how to pull parameters out of an Operator object. The BuiltinOptions method gives me a Table object, and I do not know what to map it onto (see the sketch after this list).
subgraph = model.Subgraphs(0)
What does this 0 mean? Should it always be zero? Obviously not, but what is it? The id of the subgraph? If so, I'm happy. If not, please try to explain it.
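For what it's worth, the usual pattern with FlatBuffers-generated code is to dispatch on BuiltinOptionsType() and re-initialize the matching options class from the generic Table; the argument to Subgraphs(j) and Operators(j) is simply an index (0 .. SubgraphsLength()-1 and 0 .. OperatorsLength()-1; most converted models contain a single subgraph, hence the ubiquitous 0). A minimal sketch, assuming the model's first operator happens to be a Conv2D:
from tflite.BuiltinOptions import BuiltinOptions
from tflite.Conv2DOptions import Conv2DOptions

def print_first_conv2d_options(model):
    subgraph = model.Subgraphs(0)           # subgraph index
    op = subgraph.Operators(0)              # operator index
    if op.BuiltinOptionsType() == BuiltinOptions.Conv2DOptions:
        table = op.BuiltinOptions()         # generic flatbuffers.Table
        opts = Conv2DOptions()
        opts.Init(table.Bytes, table.Pos)   # re-view the table as Conv2DOptions
        print("strides:", opts.StrideW(), opts.StrideH())
    # operator inputs/outputs are indices into subgraph.Tensors(...)
    for t in op.InputsAsNumpy():
        print("input tensor:", subgraph.Tensors(t).Name().decode('utf-8'))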
I need to implement a Gaussian log-likelihood loss function in TensorFlow, but I am not sure that what I wrote is correct. I think the correct definition of the loss is the negative log-likelihood of a Gaussian, -log p(x) = 0.5*log(2*pi*sigma^2) + (x - mu)^2 / (2*sigma^2).
I went about implementing it like this:
two_pi = 2*np.pi
def gaussian_density_function(x, mean, stddev):
    stddev2 = tf.pow(stddev, 2)
    z = tf.multiply(two_pi, stddev2)
    z = tf.pow(z, 0.5)
    arg = -0.5*(x - mean)
    arg = tf.pow(arg, 2)
    arg = tf.div(arg, stddev2)
    return tf.divide(tf.exp(arg), z)
mean_x, var_x = tf.nn.moments(dae_output_tensor, [0])
stddev_x = tf.sqrt(var_x)
loss_op_AE = -gaussian_density_function(inputs, mean_x, stddev_x)
loss_op_AE = tf.reduce_mean(loss_op_AE)
I want to use this as the loss function for an autoencoder; however, I am not sure this implementation is correct, since I get NaN out of loss_op_AE.
EDIT: I also tried using:
mean_x, var_x = tf.nn.moments(autoencoder_output, axes=[1,2])
stddev_x = tf.sqrt(var_x)
dist = tf.contrib.distributions.Normal(mean_x, stddev_x)
loss_op_AE = -dist.pdf(inputs)
and I get the same NaN values.
Model the stddev as a log stddev; this should fix the NaN issue. So instead of pretending stddev is sigma^2, pretend it is the natural logarithm of sigma^2, and exponentiate wherever you need the actual variance.
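A minimal sketch of that parameterization (written against TF 2.x here, whereas the question uses TF 1.x; names are illustrative): with log sigma^2 the variance is positive by construction, and the negative log-likelihood can be computed without ever dividing by a zero variance:
import numpy as np
import tensorflow as tf

def gaussian_nll(x, mean, log_var):
    # -log N(x; mean, exp(log_var))
    #   = 0.5*log(2*pi) + 0.5*log_var + 0.5*(x - mean)^2 / exp(log_var)
    return 0.5 * (np.log(2.0 * np.pi) + log_var
                  + tf.square(x - mean) / tf.exp(log_var))

# dummy usage: unit variance corresponds to log_var = 0
x = tf.constant([0.5, 1.5])
mean = tf.constant([0.0, 1.0])
log_var = tf.constant([0.0, 0.0])
loss = tf.reduce_mean(gaussian_nll(x, mean, log_var))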
I'm trying to implement a feed-forward backpropagating autoencoder (trained with gradient descent) and wanted to verify that I'm calculating the gradient correctly. This tutorial suggests calculating the derivative with respect to each parameter one at a time: grad_i(theta) = (J(theta_i + epsilon) - J(theta_i - epsilon)) / (2*epsilon). I've written a sample piece of code in MATLAB to do just this, but without much luck: the differences between the analytically calculated gradient and the numerically found gradient tend to be large (agreement to far fewer than 4 significant figures).
If anyone can offer any suggestions, I would greatly appreciate the help (either with my calculation of the gradient or with how I perform the check). Because I've simplified the code greatly to make it more readable, I haven't included biases, and am no longer tying the weight matrices.
First, I initialize the variables:
numHidden = 200;
numVisible = 784;
low = -4*sqrt(6./(numHidden + numVisible));
high = 4*sqrt(6./(numHidden + numVisible));
encoder = low + (high-low)*rand(numVisible, numHidden);
decoder = low + (high-low)*rand(numHidden, numVisible);
Next, given some input image x, do feed-forward propagation:
a = sigmoid(x*encoder);
z = sigmoid(a*decoder); % (reconstruction of x)
The loss function I'm using is the standard sum of squared errors, Σ 0.5*(z - x)^2:
% first calculate the error by finding the derivative of sum(0.5*(z-x).^2),
% which is (f(h)-x)*f'(h), where z = f(h), h = a*decoder, and
% f is the sigmoid. Since the derivative of the sigmoid is
% sigmoid*(1 - sigmoid), we get:
error_0 = (z - x).*z.*(1-z);
% The gradient \Delta w_{ji} = error_j*a_i
gDecoder = error_0'*a;
% not important, but included for completeness
% do back-propagation one layer down
error_1 = (error_0*encoder).*a.*(1-a);
gEncoder = error_1'*x;
And finally, check that the gradient is correct (in this case, just do it for the decoder):
epsilon = 10e-5;
check = gDecoder(:); % the values we obtained above
for i = 1:size(decoder(:), 1)
    % calculate J+
    theta = decoder(:); % unroll
    theta(i) = theta(i) + epsilon;
    decoderp = reshape(theta, size(decoder)); % re-roll
    a = sigmoid(x*encoder);
    z = sigmoid(a*decoderp);
    Jp = sum(0.5*(z - x).^2);

    % calculate J-
    theta = decoder(:);
    theta(i) = theta(i) - epsilon;
    decoderp = reshape(theta, size(decoder));
    a = sigmoid(x*encoder);
    z = sigmoid(a*decoderp);
    Jm = sum(0.5*(z - x).^2);

    grad_i = (Jp - Jm) / (2*epsilon);
    diff = abs(grad_i - check(i));
    fprintf('%d: %f <=> %f: %f\n', i, grad_i, check(i), diff);
end
Running this on the MNIST dataset (for the first entry) gives results such as:
2: 0.093885 <=> 0.028398: 0.065487
3: 0.066285 <=> 0.031096: 0.035189
5: 0.053074 <=> 0.019839: 0.033235
6: 0.108249 <=> 0.042407: 0.065843
7: 0.091576 <=> 0.009014: 0.082562
Do not apply the sigmoid to both a and z; just use it on z:
a = x*encoder;
z = sigmoid(a*decoderp);