JWK and Python cryptography package - can't reproduce matching public key

I have a JWK generated on a test website:
key = {
    "kty": "EC",
    "d": "MXrxKTl_o9yIQlExYy9c1LcWZX_OwX3aw-oGP0flUdo",
    "use": "sig",
    "crv": "secp256k1",
    "kid": "Im53aoD8zJoHzOXmfIAUkncONCIeR1pgy_nhvQrwN3s",
    "x": "hHXNLbjBY_SFeP-tOPoyoGGYjISm-m3aVJLpc3suka0",
    "y": "yYIjrvo_lqrsdxq-oMQQxBG8eyIUKmF9XazdwdGTwSY",
    "alg": "ES256"
}
I want to convert this into PEM format with Python:
import base64

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

curve = ec.SECP256R1()
signature_algorithm = ec.ECDSA(hashes.SHA256())

padding_factor = (4 - len(key['d']) % 4) % 4
padded_secret = key['d'] + '=' * padding_factor
secret_bytes = base64.urlsafe_b64decode(padded_secret)
secret_int = int.from_bytes(secret_bytes, 'big')

priv_key = ec.derive_private_key(secret_int, curve, default_backend())
pem_priv = priv_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.TraditionalOpenSSL,
    encryption_algorithm=serialization.NoEncryption()
)
At this point I test:
pub_key = priv_key.public_key()
x = pub_key.public_numbers().x
x_bytes = x.to_bytes(32, byteorder="big")
x_encoded = base64.urlsafe_b64encode(x_bytes)
self.assertTrue(key["x"] == x_encoded.decode())
This fails.
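Two things stand out, offered as hedged observations rather than a definitive diagnosis: the JWK declares "crv": "secp256k1" while the snippet derives the key on ec.SECP256R1(), which are different curves, and base64.urlsafe_b64encode produces padded output (a trailing "=") whereas JWK coordinates are unpadded, so the string comparison would fail even with matching numbers. A minimal sketch of the round trip on the curve named in the JWK, assuming the installed cryptography build supports secp256k1 and reusing the key dict above:

import base64

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.asymmetric import ec

def b64url_decode(value: str) -> bytes:
    # JWK fields are unpadded base64url; restore the padding before decoding
    return base64.urlsafe_b64decode(value + '=' * ((4 - len(value) % 4) % 4))

# Derive the key on secp256k1, the curve the JWK actually names
secret_int = int.from_bytes(b64url_decode(key['d']), 'big')
priv_key = ec.derive_private_key(secret_int, ec.SECP256K1(), default_backend())

# Re-encode the public x coordinate the way JWK does: big-endian bytes,
# unpadded base64url
pub_numbers = priv_key.public_key().public_numbers()
x_encoded = base64.urlsafe_b64encode(pub_numbers.x.to_bytes(32, 'big')).rstrip(b'=')
# This should now match, provided the JWK itself is internally consistent
assert x_encoded.decode() == key['x']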


Groupby value_counts giving KeyError

I am trying to plot countries whose scale has changed over time.
This is the dataset I am using: https://www.kaggle.com/datasets/whenamancodes/the-global-hunger-index
import numpy as np
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go

wasting = pd.read_csv('/kaggle/input/the-global-hunger-index/share-of-children-with-a-weight-too-low-for-their-height-wasting.csv')
# rename the column
wasting.rename(columns={'Prevalence of wasting, weight for height (% of children under 5)': 'Wasting'}, inplace=True)
#create new column with pd.cut
bins = [0,9.9,19.99,34.99,49.99,np.inf]
labels = ['Low','Moderate','Serious','Alarming','Extremely Alarming']
wasting['W_Scale'] = pd.cut(wasting['Wasting'],bins=bins,labels=labels,right=False).astype('category')
wasting.head()
wasting.isna().sum()
# select countries whose W_Scale takes more than one value
wasting_entity_scale = wasting.groupby('Entity').filter(lambda x: x['W_Scale'].nunique()>1)
wasting_entity_scale = wasting_entity_scale.groupby(['Year','Entity'])['W_Scale'].value_counts().reset_index(name='count')
wasting_entity_scale = wasting_entity_scale[wasting_entity_scale['count']>0]
wasting_entity_scale = wasting_entity_scale.reset_index(drop=True)
#until this point everything is fine.
traces = {}
for i, (loc, d) in enumerate(wasting_entity_scale.groupby("Entity")):
    # use meta so that we know which country a trace belongs to
    fig = px.histogram(
        d, x="Year", y="Entity", color="level_2"
    ).update_traces(meta=loc, visible=(i == 0))
    traces[loc] = fig.data
    l = fig.layout

# integrate all the traces
fig = go.Figure([t for a in traces.values() for t in a]).update_layout(l)
# now build the menu using meta to know which traces should be visible per country
fig.update_layout(
    updatemenus=[
        {
            "active": 0,
            "buttons": [
                {
                    "label": c,
                    "method": "update",
                    "args": [
                        {"visible": [t.meta == c for t in fig.data]},
                        {"title": c},
                    ],
                }
                for c in traces.keys()
            ],
        }
    ]
)
When I try to plot it, it shows this error:
KeyError: 'Serious'
Can someone please tell me what I am doing wrong?
Thank you.
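Without the full traceback it is hard to say exactly where KeyError: 'Serious' is raised, but since 'Serious' is one of the pd.cut labels, two things worth checking are whether the counts column produced by value_counts().reset_index() is really named level_2 on your pandas version, and whether the categorical dtype of W_Scale (which keeps unused categories after filtering) is tripping up the plotting step. A small diagnostic sketch, purely as an assumption about where the mismatch might be:

# Inspect the frame that is handed to plotly express: the name of the counted
# level differs across pandas versions ('level_2' on older ones, 'W_Scale' on
# newer ones), so color="level_2" may not match any column.
print(wasting_entity_scale.columns.tolist())
print(wasting_entity_scale.dtypes)

# Pick whichever column actually exists and cast it to plain strings so that
# unused categories from the original categorical cannot leak into the plot.
scale_col = 'level_2' if 'level_2' in wasting_entity_scale.columns else 'W_Scale'
wasting_entity_scale[scale_col] = wasting_entity_scale[scale_col].astype(str)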

mxnet model converts to ONNX successfully but ort.InferenceSession(model) fails

I successfully converted an mxnet model to ONNX, but it fails at inference. The model's input shape is (1, 1, 100, 100).
convert code
import numpy as np
import onnx
from onnx import checker
from mxnet.contrib import onnx as onnx_mxnet

sym = 'single-symbol.json'
params = '/single-0090.params'
input_shape = (1, 1, 100, 100)
onnx_file = './model.onnx'

converted_model_path = onnx_mxnet.export_model(sym, params, [input_shape], np.float32, onnx_file, verbose=True)

model = onnx.load_model(converted_model_path)
checker.check_graph(model.graph)
checker.check_model(model)
output
INFO:root:Input shape of the model [(1, 1, 100, 100)]
INFO:root:Exported ONNX file ./model.onnx saved to disk
inference code
import onnxruntime as ort

sess = ort.InferenceSession("./model.onnx")
output
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException:
[ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION :
Exception during initialization:
/onnxruntime/core/providers/cpu/nn/pool_attributes.h:77
onnxruntime::PoolAttributes::PoolAttributes(const OpNodeProtoHelper<onnxruntime::ProtoHelperNodeContext> &,
const std::string &, int) pads[dim] < kernel_shape[dim] &&
pads[dim + kernel_shape.size()] < kernel_shape[dim] was false.
Pad should be smaller than kernel.
Question
The mxnet Pooling node JSON:
{
  "op": "Pooling",
  "name": "pool1_fwd",
  "attrs": {
    "count_include_pad": "True",
    "global_pool": "False",
    "kernel": "(4, 4)",
    "layout": "NCHW",
    "pad": "(4, 4)",
    "pool_type": "avg",
    "pooling_convention": "valid",
    "stride": "(4, 4)"
  },
  "inputs": [[46, 0, 0]]
}
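For reference, a small sketch (an addition, not part of the original post) showing one way to confirm what the exporter actually produced by inspecting the pooling node's attributes in the ONNX graph; it only inspects, it does not fix anything:

import onnx

model = onnx.load('./model.onnx')
for node in model.graph.node:
    if node.op_type == 'AveragePool':
        attrs = {a.name: list(a.ints) for a in node.attribute if a.ints}
        # ONNX Runtime requires pads[d] < kernel_shape[d]; with kernel (4, 4)
        # and symmetric pads of 4 the exported node violates that check.
        print(node.name, attrs.get('kernel_shape'), attrs.get('pads'))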
I changed "pad": "(4, 4)" to "pad": "(3, 3)", which is smaller than "kernel": "(4, 4)", and converted again.
sess = ort.InferenceSession("./model.onnx")
output = sess.run(None, {"data": data.astype(np.float32)})
It worked, but the output values are not right.
How can I fix this?
BTW: converting the mxnet model to ncnn works correctly without changing anything (pad=(4,4), kernel=(4,4)).
Further information
python:3.8
onnx:1.10.2
mxnet:1.8.0
I fixed it by recoding the model in PyTorch and copying the weights over, using nn.ZeroPad2d(4) before the average pooling:
self.pad = nn.ZeroPad2d(4)
self.pool = nn.AvgPool2d(kernel_size=(4,4),stride=(4,4))
X = self.pool(self.pad(self.conv(X)))
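For completeness, a minimal self-contained sketch of that pad-then-pool pattern; the conv settings here are hypothetical placeholders, since the original network is not shown in the question:

import torch
import torch.nn as nn

class PadPoolBlock(nn.Module):
    """Zero-pad by 4 on each side, then 4x4 average pooling with stride 4.

    Splitting the padding out of the pooling layer sidesteps the ONNX
    constraint that a pooling node's pads must be smaller than its kernel.
    """
    def __init__(self, in_channels: int = 1, out_channels: int = 8):
        super().__init__()
        # hypothetical conv settings -- the real conv layer is not shown above
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.pad = nn.ZeroPad2d(4)
        self.pool = nn.AvgPool2d(kernel_size=(4, 4), stride=(4, 4))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pool(self.pad(self.conv(x)))

# quick shape check with the model's input shape (1, 1, 100, 100)
out = PadPoolBlock()(torch.zeros(1, 1, 100, 100))
print(out.shape)  # padded 100 -> 108, pooled by 4 -> (1, 8, 27, 27)

Because the zeros are added as real input values before pooling, they are included in every average, which matches the original node's count_include_pad: True behaviour.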

Error: Size(XX) must match the product of shape x,x,x,x

This is a newbie question, but any help will be appreciated.
I'm having a problem with a 3D tensor in TensorFlow.JS (node), with the following code:
const tf = require('@tensorflow/tfjs-node');
(async ()=>{
let list = [
{
xs: [
[
[ 0.7910133603149169, 0.7923634491520086, 0.79166712455722, 0.7928027625311359, 0.4426631841175303, 0.018719529693542337 ],
[ 0.7890709817505044, 0.7943561081665688, 0.7915865358198619, 0.7905450669351226, 0.4413258183256521, 0.04449784810703526 ],
[ 0.7940229392692819, 0.7924745639669473, 0.7881395357356101, 0.7880208892359736, 0.40902353356570315, 0.14643954229459097 ],
[ 0.801474878324385, 0.8003822349633881, 0.7969969705961001, 0.7939094034872144, 0.40227041242732126, 0.03893523221469505 ],
[ 0.8022503526561848, 0.8011600386679555, 0.7974621873981194, 0.8011488339557422, 0.43008361179994464, 0.11210020422004835 ],
],
[
[ 0.8034111510684465, 0.7985390234525179, 0.7949321830852709, 0.7943788081438548, 0.5739870761673189, 0.13358267460835263 ],
[ 0.805714476773561, 0.8072996569653942, 0.8040745782073486, 0.8035592212810225, 0.5899031300445114, 0.03229758335964042 ],
[ 0.8103322733081704, 0.8114317495511435, 0.8073606480159334, 0.8057140734135828, 0.5842202187553198, 0.01986941729798157 ],
[ 0.815132106874313, 0.8122641403791668, 0.8104353115275772, 0.8103395749739932, 0.5838313552472632, 0.03332674037143093 ],
[ 0.8118480102237944, 0.8166500561770489, 0.8128943005604122, 0.8147644523703373, 0.601619389872815, 0.04807286626501376 ],
]
],
ys: 1
}
];
const ds = tf.data.generator(async () => {
let index = 0;
return {
next: async () => {
if(index >= list.length) return { done : true };
let doc = list[index];
index++;
return {
value: {
xs : doc.xs,
ys : doc.ys
},
done: false
};
}
};
}).batch(1);
let model = tf.sequential();
model.add(tf.layers.dense({units: 60, activation: 'relu', inputShape: [2, 5, 6]}));
model.compile({
optimizer: tf.train.adam(),
loss: 'sparseCategoricalCrossentropy',
metrics: ['accuracy']
});
await model.fitDataset(ds, {epochs: 1});
return true;
})().then(console.log).catch(console.error);
This code generates the following error:
Error: Size(60) must match the product of shape 1,2,5,60
at Object.inferFromImplicitShape
I don't understand why the layer is changing the last value of the inputShape from 6 to 60 (which is the expected number of output units for this layer).
Just to confirm: as far as I know, the expected size should be the product of batchSize * x * y * z, which in this example is 1 * 2 * 5 * 6 = 60.
Thank you!
Software specification:
tfjs-node: v1.2.11
Node JS: v11.2.0
OS: Ubuntu 18.04.2
OK, the problem is that a fully connected layer (tf.layers.dense) expects a tensor1d as input, as described in this other question: Why do we flatten the data before we feed it into tensorflow?
So the tensor must be reshaped before the fully connected layer:
return {
  value: {
    xs: tf.reshape(doc.xs, [-1]),
    ys: doc.ys
  },
  done: false
};
The -1 in tf.reshape(tensor, [-1]) tells the transformation to flatten the tensor.
For a visual demonstration, here is a YouTube video: CNN Flatten Operation Visualized

You must feed a value for placeholder tensor 'input_example_tensor' with dtype string and shape [1]

I am developing a TensorFlow Serving client/server application using the chatbot-retrieval project.
My code has two parts, namely serving part and client part.
Below is the code snippet for the serving part.
def get_features(context, utterance):
    context_len = 50
    utterance_len = 50
    features = {
        "context": context,
        "context_len": tf.constant(context_len, shape=[1,1], dtype=tf.int64),
        "utterance": utterance,
        "utterance_len": tf.constant(utterance_len, shape=[1,1], dtype=tf.int64),
    }
    return features

def my_input_fn(estimator, input_example_tensor):
    feature_configs = {
        'context': tf.FixedLenFeature(shape=[50], dtype=tf.int64),
        'utterance': tf.FixedLenFeature(shape=[50], dtype=tf.int64)
    }
    tf_example = tf.parse_example(input_example_tensor, feature_configs)
    context = tf.identity(tf_example['context'], name='context')
    utterance = tf.identity(tf_example['utterance'], name='utterance')
    features = get_features(context, utterance)
    return features
def my_signature_fn(input_example_tensor, features, predictions):
    feature_configs = {
        'context': tf.FixedLenFeature(shape=[50], dtype=tf.int64),
        'utterance': tf.FixedLenFeature(shape=[50], dtype=tf.int64)
    }
    tf_example = tf.parse_example(input_example_tensor, feature_configs)
    tf_context = tf.identity(tf_example['context'], name='tf_context_utterance')
    tf_utterance = tf.identity(tf_example['utterance'], name='tf_utterance')
    default_graph_signature = exporter.regression_signature(
        input_tensor=input_example_tensor,
        output_tensor=tf.identity(predictions)
    )
    named_graph_signatures = {
        'inputs': exporter.generic_signature(
            {
                'context': tf_context,
                'utterance': tf_utterance
            }
        ),
        'outputs': exporter.generic_signature(
            {
                'scores': predictions
            }
        )
    }
    return default_graph_signature, named_graph_signatures
def main():
    ##preliminary codes here##
    estimator.fit(input_fn=input_fn_train, steps=100, monitors=[eval_monitor])
    estimator.export(
        export_dir = FLAGS.export_dir,
        input_fn = my_input_fn,
        use_deprecated_input_fn = True,
        signature_fn = my_signature_fn,
        exports_to_keep = 1
    )
Below is the code snippet for the client part.
def tokenizer_fn(iterator):
    return (x.split(" ") for x in iterator)
vp = tf.contrib.learn.preprocessing.VocabularyProcessor.restore(FLAGS.vocab_processor_file)
input_context = "biz banka kart farkli bir banka atmsinde para"
input_utterance = "farkli banka kart biz banka atmsinde para"
context_feature = np.array(list(vp.transform([input_context])))
utterance_feature = np.array(list(vp.transform([input_utterance])))
context_tensor = tf.contrib.util.make_tensor_proto(context_feature, shape=[1, context_feature.size])
utterance_tensor = tf.contrib.util.make_tensor_proto(utterance_feature, shape=[1, utterance_feature.size])
request.inputs['context'].CopyFrom(context_tensor)
request.inputs['utterance'].CopyFrom(utterance_tensor)
result_counter.throttle()
result_future = stub.Predict.future(request, 5.0) # 5 seconds
result_future.add_done_callback(
_create_rpc_callback(label[0], result_counter))
return result_counter.get_error_rate()
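For context, the snippet above uses request and stub without showing how they are created. A minimal setup sketch in the style of the classic TensorFlow Serving gRPC clients of that era; the host, port, and model name below are placeholders, not values from the question:

from grpc.beta import implementations
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2

# Placeholder endpoint and model name -- adjust to your serving deployment
host, port = 'localhost', 9000
channel = implementations.insecure_channel(host, port)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'chatbot'  # hypothetical model name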
Both the serving and client parts build with no errors. After running the serving application and then the client application, the following strange error is propagated to the client application when the RPC call completes.
Below is the error I get when the RPC call completes:
AbortionError(code=StatusCode.INVALID_ARGUMENT, details="You must feed a value for placeholder tensor 'input_example_tensor' with dtype string and shape [1]
[[Node: input_example_tensor = Placeholder[_output_shapes=[[1]], dtype=DT_STRING, shape=[1], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]")
The error is strange since there seems to be no way to feed the placeholder from the client application.
How can I provide data for the placeholder 'input_example_tensor' if I am accessing the model through tensorflow serving?
ANSWER:
(I posted my answer here since I couldn't post it as an answer due to a lack of StackOverflow badges. Anyone who volunteers to submit it as their own answer to the question is more than welcome; I will accept it as the answer.)
I resolved the problem by using the option use_deprecated_input_fn = False in the estimator.export function and changing the input signatures accordingly.
Below is the final code, which runs with no problem.
def get_features(input_example_tensor, context, utterance):
    context_len = 50
    utterance_len = 50
    features = {
        "my_input_example_tensor": input_example_tensor,
        "context": context,
        "context_len": tf.constant(context_len, shape=[1,1], dtype=tf.int64),
        "utterance": utterance,
        "utterance_len": tf.constant(utterance_len, shape=[1,1], dtype=tf.int64),
    }
    return features

def my_input_fn():
    input_example_tensor = tf.placeholder(tf.string, name='tf_example_placeholder')
    feature_configs = {
        'context': tf.FixedLenFeature(shape=[50], dtype=tf.int64),
        'utterance': tf.FixedLenFeature(shape=[50], dtype=tf.int64)
    }
    tf_example = tf.parse_example(input_example_tensor, feature_configs)
    context = tf.identity(tf_example['context'], name='context')
    utterance = tf.identity(tf_example['utterance'], name='utterance')
    features = get_features(input_example_tensor, context, utterance)
    return features, None
def my_signature_fn(input_example_tensor, features, predictions):
    default_graph_signature = exporter.regression_signature(
        input_tensor=input_example_tensor,
        output_tensor=predictions
    )
    named_graph_signatures = {
        'inputs': exporter.generic_signature(
            {
                'context': features['context'],
                'utterance': features['utterance']
            }
        ),
        'outputs': exporter.generic_signature(
            {
                'scores': predictions
            }
        )
    }
    return default_graph_signature, named_graph_signatures
def main():
    ##preliminary codes here##
    estimator.fit(input_fn=input_fn_train, steps=100, monitors=[eval_monitor])
    estimator._targets_info = tf.contrib.learn.estimators.tensor_signature.TensorSignature(tf.constant(0, shape=[1,1]))
    estimator.export(
        export_dir = FLAGS.export_dir,
        input_fn = my_input_fn,
        input_feature_key = "my_input_example_tensor",
        use_deprecated_input_fn = False,
        signature_fn = my_signature_fn,
        exports_to_keep = 1
    )
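As far as the snippets above show, the key differences from the failing export are that my_input_fn now creates the tf.string placeholder itself and get_features exposes it under the key "my_input_example_tensor", and that estimator.export is called with use_deprecated_input_fn = False together with input_feature_key = "my_input_example_tensor", so the exported graph knows which placeholder the serialized Example protos should feed.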
OP self-solved but couldn't self-answer; their fix is the code above: setting use_deprecated_input_fn = False in estimator.export and changing the input signatures accordingly.

R EVMIX convert pdf to uniform marginals

I'm trying to convert a distribution into a pseudo-uniform distribution. Using the spd R package, it is easy and it works as expected.
library(spd)
x <- c(rnorm(100,-1,0.7),rnorm(100,3,1))
fit<-spdfit(x,upper=0.9,lower=0.1,tailfit="GPD", kernelfit="epanech")
uniformX = pspd(x,fit)
I want to generalize the extreme value modeling to include threshold uncertainty, so I used the evmix package.
library(evmix)
x <- c(rnorm(100,-1,0.7),rnorm(100,3,1))
fit = fgkg(x, phiul = FALSE, phiur = FALSE, std.err = FALSE)
pgkg(x,fit$lambda, fit$ul, fit$sigmaul, fit$xil, fit$phiul, fit$ur,
fit$sigmaur, fit$xir, fit$phiur)
I'm messing up somewhere.
Please check out the help for the pgkg function:
help(pgkg)
which gives the syntax:
pgkg(q, kerncentres, lambda = NULL, ul = as.vector(quantile(kerncentres,
0.1)), sigmaul = sqrt(6 * var(kerncentres))/pi, xil = 0, phiul = TRUE,
ur = as.vector(quantile(kerncentres, 0.9)), sigmaur = sqrt(6 *
var(kerncentres))/pi, xir = 0, phiur = TRUE, bw = NULL,
kernel = "gaussian", lower.tail = TRUE)
You have missed the kernel centres (the data), which are always needed for kernel density estimators. Here is the corrected code:
library(evmix)
x <- c(rnorm(100,-1,0.7),rnorm(100,3,1))
fit = fgkg(x, phiul = FALSE, phiur = FALSE, std.err = FALSE)
prob = pgkg(x, x, fit$lambda, fit$ul, fit$sigmaul, fit$xil, fit$phiul,
fit$ur, fit$sigmaur, fit$xir, fit$phiur)
hist(prob) # now uniform as expected