My first test with boto fails with SignatureDoesNotMatch - amazon-s3

So here is my first test for S3 buckets using boto:
import boto
user_name, access_key, secret_key = "testing-user", "xxxxxxxxxxxxx", "xxxxxxxx/xxxxxxxxxxxx/xxxxxxxxxx(xxxxx)"
conn = boto.connect_s3(access_key, secret_key)
buckets = conn.get_all_buckets()
I get the following error:
Traceback (most recent call last):
File "test-s3.py", line 9, in <module>
buckets = conn.get_all_buckets()
File "xxxxxx/lib/python2.7/site-packages/boto/s3/connection.py", line 440, in get_all_buckets
response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>AKIAJMHSZXU6MORWA5GA</AWSAccessKeyId><StringToSign>GET
Mon, 18 May 2015 06:21:58 GMT
/</StringToSign><SignatureProvided>c/+YJAZVInsfmd5giMQmrh81DPA=</SignatureProvided><StringToSignBytes>47 45 54 0a 0a 0a 4d 6f 6e 2c 20 31 38 20 4d 61 79 20 32 30 31 35 20 30 36 3a 32 31 3a 35 38 20 47 4d 54 0a 2f</StringToSignBytes><RequestId>5733F9C8926497E6</RequestId><HostId>FXPejeYuvZ+oV2DJLh7HBpryOh4Ve3Mmj8g8bKA2f/4dTWDHJiG8Bpir8EykLYYW1OJMhZorbIQ=</HostId></Error>
How am I supposed to fix this?
Boto version is 2.38.0

I had the same issue. In my case, the generated secret key had a special character '+' in the middle. I deleted the key and generated a new one; it worked with the new key, which contained no '+'.
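For reference, here is a minimal sketch of how the Signature Version 2 signature in that error is computed (the StringToSign is the one echoed back in the XML response above; the secret key below is just a placeholder). If the secret key passed to boto differs even slightly from the one AWS has on file, for example because a '+' was mangled while copying it, the two signatures diverge and you get SignatureDoesNotMatch:
import base64
import hashlib
import hmac

# StringToSign exactly as reported in the error response
string_to_sign = "GET\n\n\nMon, 18 May 2015 06:21:58 GMT\n/"
secret_key = "xxxxxxxx/xxxxxxxxxxxx/xxxxxxxxxx"  # placeholder, not a real key

signature = base64.b64encode(
    hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
)
print(signature)  # must match the server-side value byte for byte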

Today I ran into a SignatureDoesNotMatch error response while playing around with an S3 API locally; replacing localhost with 127.0.0.1 fixed the problem in my case.
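If you are talking to a local S3-compatible endpoint, it can also help to spell out the host and calling format explicitly. A sketch, assuming a plain-HTTP server on 127.0.0.1:9000 (adjust host, port and security settings to your setup):
import boto
from boto.s3.connection import OrdinaryCallingFormat

conn = boto.connect_s3(
    access_key, secret_key,
    host="127.0.0.1", port=9000,              # assumed local endpoint
    is_secure=False,                           # assumed plain HTTP
    calling_format=OrdinaryCallingFormat(),    # path-style addressing
)
buckets = conn.get_all_buckets()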


Kafka consumer .NET: 'Protocol message end-group tag did not match expected tag.'

I am trying to read data from Kafka, as you can see:
var config = new ConsumerConfig
{
BootstrapServers = "*******",
GroupId = Guid.NewGuid().ToString(),
AutoOffsetReset = AutoOffsetReset.Earliest
};
MessageParser<AdminIpoChange> parser = new(() => new AdminIpoChange());
using (var consumer = new ConsumerBuilder<Ignore, byte[]>(config).Build())
{
consumer.Subscribe("AdminIpoChange");
while (true)
{
AdminIpoChange item = new AdminIpoChange();
var cr = consumer.Consume();
item = parser.ParseFrom(new ReadOnlySpan<byte>(cr.Message.Value).ToArray());
}
consumer.Close();
}
I am using Google Protobuf to send and receive data. This code returns the following error on the parser line:
KafkaConsumer.ConsumeAsync: Protocol message end-group tag did not match expected tag.
Google.Protobuf.InvalidProtocolBufferException: Protocol message end-group tag did not match expected tag.
at Google.Protobuf.ParsingPrimitivesMessages.CheckLastTagWas(ParserInternalState& state, UInt32 expectedTag)
at Google.Protobuf.ParsingPrimitivesMessages.ReadGroup(ParseContext& ctx, Int32 fieldNumber, UnknownFieldSet set)
at Google.Protobuf.UnknownFieldSet.MergeFieldFrom(ParseContext& ctx)
at Google.Protobuf.UnknownFieldSet.MergeFieldFrom(UnknownFieldSet unknownFields, ParseContext& ctx)
at AdminIpoChange.pb::Google.Protobuf.IBufferMessage.InternalMergeFrom(ParseContext& input) in D:\MofidProject\domain\obj\Debug\net6.0\Protos\Rlc\AdminIpoChange.cs:line 213
at Google.Protobuf.ParsingPrimitivesMessages.ReadRawMessage(ParseContext& ctx, IMessage message)
at Google.Protobuf.CodedInputStream.ReadRawMessage(IMessage message)
at AdminIpoChange.MergeFrom(CodedInputStream input) in D:\MofidProject\domain\obj\Debug\net6.0\Protos\Rlc\AdminIpoChange.cs:line 188
at Google.Protobuf.MessageExtensions.MergeFrom(IMessage message, Byte[] data, Boolean discardUnknownFields, ExtensionRegistry registry)
at Google.Protobuf.MessageParser`1.ParseFrom(Byte[] data)
at infrastructure.Queue.Kafka.KafkaConsumer.ConsumeCarefully[T](Func`2 consumeFunc, String topic, String group) in D:\MofidProject\infrastructure\Queue\Kafka\KafkaConsumer.cs:line 168
D:\MofidProject\mts.consumer.plus\bin\Debug\net6.0\mts.consumer.plus.exe (process 15516) exited with code -1001.
To automatically close the console when debugging stops, enable Tools->Options->Debugging->Automatically close the console when debugging stops.
Update:
My sample data that comes from Kafka:
- {"SymbolName":"\u0641\u062F\u0631","SymbolIsin":"IRo3pzAZ0002","Date":"1400/12/15","Time":"08:00-12:00","MinPrice":17726,"MaxPrice":21666,"Share":1000,"Show":false,"Operation":0,"Id":"100d8e0b54154e9d902054bff193e875","CreateDateTime":"2022-02-26T09:47:20.0134757+03:30"}
My Rlc model:
syntax = "proto3";
message AdminIpoChange
{
string Id =1;
string SymbolName =2;
string SymbolIsin =3;
string Date =4;
string Time=5;
double MinPrice =6;
double MaxPrice =7;
int32 Share =8;
bool Show =9;
int32 Operation =10;
string CreateDateTime=11;
enum AdminIpoOperation
{
Add = 0;
Edit = 1;
Delete = 2;
}
}
My data in bytes :
7B 22 53 79 6D 62 6F 6C 4E 61 6D 65 22 3A 22 5C 75 30 36 34 31 5C 75 30 36 32 46 5C 75 30
36 33 31 22 2C 22 53 79 6D 62 6F 6C 49 73 69 6E 22 3A 22 49 52 6F 33 70 7A 41 5A 30 30 30
32 22 2C 22 44 61 74 65 22 3A 22 31 34 30 30 2F 31 32 2F 31 35 22 2C 22 54 69 6D 65 22 3A
22 30 38 3A 30 30 2D 31 32 3A 30 30 22 2C 22 4D 69 6E 50 72 69 63 65 22 3A 31 37 37 32 36
2C 22 4D 61 78 50 72 69 63 65 22 3A 32 31 36 36 36 2C 22 53 68 61 72 65 22 3A 31 30 30 30
2C 22 53 68 6F 77 22 3A 66 61 6C 73 65 2C 22 4F 70 65 72 61 74 69 6F 6E 22 3A 30 2C 22 49
64 22 3A 22 31 30 30 64 38 65 30 62 35 34 31 35 34 65 39 64 39 30 32 30 35 34 62 66 66 31
39 33 65 38 37 35 22 2C 22 43 72 65 61 74 65 44 61 74 65 54 69 6D 65 22 3A 22 32 30 32 32
2D 30 32 2D 32 36 54 30 39 3A 34 37 3A 32 30 2E 30 31 33 34 37 35 37 2B 30 33 3A 33 30 22
7D
The data is definitely not protobuf binary; byte 0 starts a group with field number 15; inside this group is:
- field 4, string
- field 13, fixed32
- field 6, varint
- field 12, fixed32
- field 6, varint
after this (at byte 151), an end-group token is encountered with field number 6
There are many striking things about this:
- your schema doesn't use groups (in fact, the mere existence of groups is now hard to find in the docs), so ... none of this looks right
- end-group tokens are always required to match the last start-group field number, which it doesn't
- fields inside a single level are usually (although as a "should", not a "must") written in numerical order
- you have no field 12 or 13 declared
- your field 6 is of the wrong type - we expect fixed64 here, but got varint
So: there's no doubt about it: that data is ... not what you expect. It certainly isn't valid protobuf binary. Without knowing how that data is stored, all we can do is guess, but on a hunch: let's try decoding it as UTF8 and see what it looks like:
{"SymbolName":"\u0641\u062F\u0631","SymbolIsin":"IRo3pzAZ0002","Date":"1400/12/15","Time":"08:00-12:00","MinPrice":17726,"MaxPrice":21666,"Share":1000,"Show":false,"Operation":0,"Id":"100d8e0b54154e9d902054bff193e875","CreateDateTime":"2022-02-26T09:47:20.0134757+03:30"}
or (formatted)
{
"SymbolName":"\u0641\u062F\u0631",
"SymbolIsin":"IRo3pzAZ0002",
"Date":"1400/12/15",
"Time":"08:00-12:00",
"MinPrice":17726,
"MaxPrice":21666,
"Share":1000,
"Show":false,
"Operation":0,
"Id":"100d8e0b54154e9d902054bff193e875",
"CreateDateTime":"2022-02-26T09:47:20.0134757+03:30"
}
Oops! You've written the data as JSON, and you're trying to decode it as binary protobuf. Decode it as JSON instead, and you should be fine. If this was written with the protobuf JSON API: decode it with the protobuf JSON API.
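For example, with the message and consumer from the question, a minimal sketch of decoding the payload as protobuf JSON instead of binary could look like this (assuming the payload really is UTF-8 JSON, as the byte dump suggests):
using System.Text;
using Google.Protobuf;

var cr = consumer.Consume();
string json = Encoding.UTF8.GetString(cr.Message.Value);
// JsonParser accepts both the original proto field names and their lowerCamelCase JSON names
AdminIpoChange item = JsonParser.Default.Parse<AdminIpoChange>(json);
If the producer can be changed instead, the cleaner fix is arguably to publish item.ToByteArray() (binary protobuf) and keep the existing parser.ParseFrom call.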

Prevent Envoy from modifying the sharding key

We use a two-layer Envoy setup.
[front-end] -> E -> [middleware] -> E -> [backend]
Middleware is supposed to take the sharding key from the HTTP metadata and re-transmit it when talking to the backend.
What we have noticed is that Envoy modifies the HTTP header, which is crashing our service inside gRPC.
E1016 11:19:45.808599731 19 call.cc:912] validate_metadata: {"created":"#1602847185.808584663","description":"Illegal header value","file":"external/com_github_grpc_grpc/src/core/lib/surface/validate_metadata.cc","file_line":44,"offset":56,"raw_bytes":"36 37 36 38 33 61 34 34 36 35 36 35 37 30 34 33 36 66 36 34 36 35 34 31 34 39 33 61 36 35 36 33 36 63 36 39 37 30 37 33 36 35 32 64 37 30 36 63 37 35 36 37 36 39 36 65 a5 '67683a44656570436f646541493a65636c697073652d706c7567696e.'\u0000"}
E1016 11:19:45.808619606 19 call_op_set.h:947] assertion failed: false
Any way to avoid this?
UPDATE:
It seems to happen only with x- headers.
The problem was actually not related to Envoy in the end. It turns out that gRPC strings are not null-terminated: reading a metadata value as if it were a NUL-terminated C string runs past its real end and picks up a garbage byte (note the stray a5 at the end of the raw_bytes above), which is what made the header value illegal. Copying the value using its explicit length avoids this; see the sketch below.
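A rough sketch of what that looks like in C++ gRPC (the header name x-shard-key and the function name are made up for illustration): grpc::string_ref is not null-terminated, so the value has to be copied using data() and length():
#include <grpcpp/grpcpp.h>
#include <string>

// Forward an incoming metadata value to an outgoing call without assuming
// a terminating NUL byte.
void ForwardShardKey(grpc::ServerContext* in_ctx, grpc::ClientContext* out_ctx) {
  const auto& md = in_ctx->client_metadata();
  auto it = md.find("x-shard-key");  // assumed header name
  if (it != md.end()) {
    std::string value(it->second.data(), it->second.length());
    out_ctx->AddMetadata("x-shard-key", value);
  }
}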

Unblock code PIN with APDU commands: error "67 00" --> Wrong length

Using WinSCard.dll, I want to send APDU commands to reset the PIN code and set a new one on the smart card. But when I send these commands, I get the error "67 00" ("Wrong length").
My APDU commands:
// First command, I verify the code PUK (return "90 00")
00 20 00 02 08 36 35 32 34 39 38 37 36
// Second command, I try to set a new code PIN into the card
00 2C 03 01 0C 36 35 32 34 39 38 37 36 31 32 33 34
For second command:
36 35 32 34 39 38 37 36 -> code PUK
31 32 33 34 -> new code PIN
After some searching, the only explanation I have found is that the "Lc" parameter was wrong. But in my case it is equal to "0C", and the length of my data is "0C".
So I don't understand where my error is.
Have you got an idea?
Thank you very much for your help!
Note:
If I reset the PIN code without setting a new PIN (which restores the previous PIN code), it works fine:
00 20 00 02 08 31 38 39 30 31 36 39 32
// Reset code PIN
00 2C 03 01 00
Using the RESET RETRY COUNTER command (INS = 0x2C) with P1 = 0x03 means that you want to reset the retry counter without setting new reference data (i.e. a new PIN). If you want to set new reference data (a new PIN) when resetting the retry counter, you could try one of the following, depending on what your card supports (see the byte-level sketch after these options):
P1 = 0x00 (for the format you tried):
00 2C 00 01 0C 36 35 32 34 39 38 37 36 31 32 33 34
P1 = 0x02 (only the new reference data is sent):
00 2C 02 01 04 31 32 33 34
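As a sanity check, a small sketch of how the P1 = 0x00 variant above is assembled (plain byte arithmetic, values taken from the question; it says nothing about which variants your card actually supports):
# RESET RETRY COUNTER, P1 = 0x00: data = PUK followed by the new PIN
puk = bytes.fromhex("3635323439383736")   # ASCII "65249876"
new_pin = bytes.fromhex("31323334")       # ASCII "1234"
data = puk + new_pin
apdu = bytes([0x00, 0x2C, 0x00, 0x01, len(data)]) + data   # Lc = 0x0C
print(apdu.hex(" ").upper())   # 00 2C 00 01 0C 36 35 ... 33 34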
Your length should be 0x10. Please refer to the example below:
A0 2C 00 01 10 3636303535333132 31323334 FFFFFFFF
Command : A0 2C 00 01 10
Input Data : 36 36 30 35 35 33 31 32 31 32 33 34 FF FF FF FF
Output Data : none
Status : 90 00
Here 3636303535333132 is the unblock key and 31323334 is the new PIN.

Extracting data from a .DLL: unknown file offsets

I'm currently trying to extract some data from a .DLL library. I've figured out the file structure: there are 1039 data blocks compressed with zlib, starting at offset 0x3c00, the last one being the FAT table. The FAT table itself is divided into 1038 "blocks" (8 bytes plus a base64-encoded string, the filename). As far as I've seen, byte 5 is the length of the filename.
My problem is that I can't work out what bytes 1-4 are used for. My first guess was that they are an offset used to locate the file block inside the .DLL (mainly because the values increase throughout the table), but, for instance, the first "block" is:
Supposed offset: 2E 78 00 00
Filename length: 30 00 00 00
Base64 encoded filename: 59 6D 46 30 64 47 78 6C 58 32 6C 75 64 47 56 79 5A 6D 46 6A 5A 56 78 42 59 33 52 70 64 6D 56 51 5A 58 4A 72 63 31 4E 6F 62 33 63 75 59 77 3D 3D
yet, as I said earlier, the block itself is at 0x3c00, so things don't match. The same goes for the second block (starting at 0x3f0b, whereas the offset given in the table is 0x167e).
Any ideas?
Answering my own question, lol.
Anyway, those numbers are the actual offsets of the file blocks, except that the first one starts from some arbitrary number rather than from the actual location of the first block. Aside from that, the differences between each pair of consecutive offsets do match the lengths of the corresponding blocks.
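For what it's worth, here is a rough sketch of how one could walk the FAT table under those assumptions (entry = 4-byte little-endian offset + 4-byte little-endian filename length + base64 filename, with the offsets rebased so that the first entry lands on the first block at 0x3c00; parse_fat is a made-up helper name):
import base64
import struct

def parse_fat(fat, first_block_offset=0x3C00):
    # fat: the already-decompressed FAT table bytes
    entries, pos, rebase = [], 0, None
    while pos + 8 <= len(fat):
        raw_offset, name_len = struct.unpack_from("<II", fat, pos)
        name = base64.b64decode(fat[pos + 8:pos + 8 + name_len]).decode()
        if rebase is None:
            rebase = raw_offset - first_block_offset   # assumed constant shift
        entries.append((raw_offset - rebase, name))
        pos += 8 + name_len
    return entries
Each block can then be sliced out of the .DLL at its rebased offset (up to the next entry's offset) and inflated with zlib.decompress.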

How to return LOW VALUES HEX '00' in sql statement?

I need to write LOW-VALUES (HEX '00') into a file, in the middle of a string.
I can do it using the utl_file package with utl_file.put_raw(v_file, hextoraw('000000')), but that only works at the beginning or end of the file, not in the middle of a string.
So my question is: how do I produce LOW-VALUES (HEX '00') in a SELECT statement?
I tried some variants like
Select 'blablabla' Q, hextoraw('000000'), 'blablabla' W from dual;
saved the result into a .dat file and opened it in a hex editor, but the result was different from what utl_file produces.
Could anybody write a correct SQL statement, if that's possible?
If I understand you correctly, you're trying to add a null/binary zero to your output. If so, you can just use chr(0),
e.g. utl_file.putf(l_file, 'This is a binary zero' || chr(0));
Looking at that in a hex editor will show you:
00000000 54 68 69 73 20 69 73 20 61 20 62 69 6e 61 72 79 |This is a binary|
00000010 20 7a 65 72 6f 00 0a | zero..|
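And since the question asked for it in a SELECT, the same chr(0) trick works there too. A sketch (how the null byte survives spooling to a .dat file depends on the client, so verify the output in a hex editor):
Select 'blablabla' || chr(0) || 'blablabla' as Q from dual;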