Oracle.ManagedDataAccess.Client.OracleException: ORA-01017: invalid username/password; logon denied - odp.net

I'm experiencing an issue using ODP.NET version 19.3 (the latest Oracle Managed DataAccess available) to connect to Oracle 11g with a Secure External Password Store (SEPS), where the Oracle login credentials are stored in a client-side Oracle wallet.
If I switch to a classic login/password connection string, there are no problems with the database connection or commands.
Furthermore, all other .NET applications that use classic ODP 11 (unmanaged DataAccess) have no problem connecting in SEPS mode; this is in fact the first case in which we use ODP.NET 19.3 with SEPS and an Oracle wallet.
In order, I have:
created a wallet (with the mkstore utility) for our application and put it in a directory on the server, i.e.
C:\users\%APP_POOL_ID%\wallet
created (for all applications) a sqlnet.ora file and put it in the Oracle Home directory of the server, i.e.
%ORACLE_HOME%\Network\Admin
with the following content:
SQLNET.AUTHENTICATION_SERVICES=(NTS)
NAMES.DIRECTORY_PATH=(TNSNAMES,LDAP,EZCONNECT,HOSTNAME)
names.ldap_conn_timeout = 1
WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=c:\users\%APP_POOL_ID%\wallet)))
SQLNET.WALLET_OVERRIDE = TRUE
DIAG_ADR_ENABLED = off
used the following connection string:
Data Source=DS_NAME_1; User ID=[USER_ID_1];Proxy User Id=[USER_ID_1];
Note: User ID and Proxy User Id are specified with square brackets inside connection string.
This is the exception with stack trace we obtain:
Oracle.ManagedDataAccess.Client.OracleException: ORA-01017: invalid username/password; logon denied
at OracleInternal.ConnectionPool.PoolManager`3.Get(ConnectionString csWithDiffOrNewPwd, Boolean bGetForApp, OracleConnection connRefForCriteria, String affinityInstanceName, Boolean bForceMatch)
at OracleInternal.ConnectionPool.OraclePoolManager.Get(ConnectionString csWithNewPassword, Boolean bGetForApp, OracleConnection connRefForCriteria, String affinityInstanceName, Boolean bForceMatch)
at OracleInternal.ConnectionPool.OracleConnectionDispenser`3.Get(ConnectionString cs, PM conPM, ConnectionString pmCS, SecureString securedPassword, SecureString securedProxyPassword, OracleConnection connRefForCriteria)
at Oracle.ManagedDataAccess.Client.OracleConnection.Open()
and this is a portion of the trace I obtain by enabling tracing in the <oracle.manageddataaccess.client> config section:
>[...]
>(PRI) (TUN) OracleTuningAgent::Unegister(): Unegistered pool Data Source=DS_NAME_1; User ID=;Proxy User Id=[USER_ID_1];
>[...]
Additionally, in another trace file, it's possible to see that WriteOAuthMessage passes a BLANK password to the DB:
>(PRI) (TTC) (EXT) TTCAuthenticate.ReadOSessKeyResponse()
>(PRI) (SVC) (ENT) OracleConnectionImpl.CheckForAnyErrorFromDB()
>(PRI) (SVC) (EXT) OracleConnectionImpl.CheckForAnyErrorFromDB()
>(PRI) (TTC) (ENT) TTCAuthenticate.WriteOAuthMessage()
>(PRI) (TTC) (ENT) TTCAuthenticate.WriteOAuthMessage()
>(PRI) (TTC) (ENT) TTCFunction.WriteFunctionHeader()
>(PRI) (TTC) (ENT) TTCMessage.WriteTTCCode()
>(PRI) (TTC) (EXT) TTCMessage.WriteTTCCode()
>(PRI) (TTC) (EXT) TTCFunction.WriteFunctionHeader()
>(PRI) (TTC) (EXT) TTCAuthenticate.WriteOAuthMessage()
>(PRI) (TTC) (EXT) TTCAuthenticate.WriteOAuthMessage()
>(NET) (SND) 00 00 03 80 06 00 00 00 |........|
>(NET) (SND) 00 00 |.. |
>(NET) (SND) 03 73 00 01 01 06 02 01 |.s......|
>[..user removed..]
>(NET) (SND) 41 55 54 48 5F 50 41 53 |AUTH_PAS|
>(NET) (SND) 53 57 4F 52 44 01 40 40 |SWORD.##|
>(NET) (SND) 00 00 00 00 00 00 00 00 |........|
>(NET) (SND) 00 00 00 00 00 00 00 00 |........|
>(NET) (SND) 00 00 00 00 00 00 00 00 |........|
>(NET) (SND) 00 00 00 00 00 00 00 00 |........|
>(NET) (SND) 00 00 00 00 00 00 00 00 |........|
>(NET) (SND) 00 00 00 00 00 00 00 00 |........|
>(NET) (SND) 00 00 00 00 00 00 00 00 |........|
>(NET) (SND) 00 00 00 00 00 00 00 00 |........|
I ruled out a genuinely wrong login/password by logging in to the server machine with the user credentials and testing the connection to the database via the sqlplus command.
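For completeness, the wallet can also be exercised from a quick script, independent of ODP.NET; the following is only a sketch (it assumes cx_Oracle in thick mode, which reads the same sqlnet.ora and wallet as sqlplus), not the failing managed code path:
# Hypothetical wallet cross-check with cx_Oracle (thick mode uses the Oracle
# Client, so it picks up the same sqlnet.ora / wallet as sqlplus).
# DS_NAME_1 is the TNS alias stored in the wallet; no user/password is given,
# so the client performs external authentication against the wallet entry.
import cx_Oracle
conn = cx_Oracle.connect(dsn="DS_NAME_1")   # roughly equivalent to: sqlplus /@DS_NAME_1
print(conn.version)                         # prints the DB version if the wallet login works
conn.close()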
Can anyone help me?
Many thanks!

Why do I get 2 interface descriptors before the endpoint descriptor after a GET_DESCRIPTOR USB request on QEMU?

I am writing a small x86-64 hobby OS that I boot with UEFI. I am currently writing a driver for Intel's xHC. I am at a point where I can address USB devices and have a Transfer Ring allocated for Endpoint 0 of each device. I then use a GET_DESCRIPTOR request to get the configuration descriptor of each device. I ask QEMU to emulate a USB keyboard and a USB mouse. I thus get 2 different descriptors, which are the following:
user#user-System-Product-Name:~$ hexdump -C result.bin
00000000 09 02 22 00 01 01 06 a0 32 09 04 00 00 01 03 01 |..".....2.......|
00000010 02 00 09 21 01 00 00 01 22 34 00 07 05 81 03 04 |...!...."4......|
00000020 00 07 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00000030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00001000 09 02 22 00 01 01 08 a0 32 09 04 00 00 01 03 01 |..".....2.......|
00001010 01 00 09 21 11 01 00 01 22 3f 00 07 05 81 03 08 |...!...."?......|
00001020 00 07 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00001030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00002000
Basically, I ask GDB to dump the region of RAM where I placed the descriptors into the file result.bin. Then I hexdump the content of result.bin in the console. Here you can see the configuration of the USB mouse first and then, one page later, the configuration of the USB keyboard.
The configuration descriptor of the mouse is 09 02 22 00 01 01 06 a0 32. It is followed by 2 interface descriptors: 09 04 00 00 01 03 01 02 00 and 09 21 01 00 00 01 22 34 00. Those are followed by one endpoint descriptor: 07 05 81 03 08 00 07.
In the first interface descriptor for both the mouse and the keyboard, it is indicated that there is one endpoint descriptor (by the bNumEndpoints field of the descriptor, which is byte number 4, indexed from 0). I would expect the following descriptor to be the endpoint descriptor. Instead, I get a second interface descriptor (indicated by the fact that it has a length of 9 bytes instead of 7 and by the values of the different fields).
As stated on https://wiki.osdev.org/Universal_Serial_Bus:
Each CONFIGURATION descriptor has at least one INTERFACE descriptor, and each INTERFACE descriptor may have up to 15 ENDPOINT descriptors. When the host requests a certain CONFIGURATION descriptor, the device returns the CONFIGURATION descriptor followed immediately by the first INTERFACE descriptor, followed immediately by all of the ENDPOINT descriptors for endpoints that the interface defines (which may be none). This is followed immediately by the next INTERFACE descriptor if one exists, and then by its ENDPOINT descriptors if applicable. This pattern continues until all the information within the scope of the specific configuration is transfered.
Why do I get 2 interface descriptors followed by the endpoint descriptor in my case? Is it a QEMU bug or is it something I should expect?
You're not accurately describing the binary data that I see in your shell output.
The dump starts with a 9-byte descriptor of type 2, so that is your configuration descriptor:
09 02 22 00 01 01 06 a0 32
Then there is a 9-byte descriptor of type 4, so that is an interface, and it has bNumEndpoints set to 1:
09 04 00 00 01 03 01 02 00
Then there is another 9-byte descriptor, of type 0x21; that is the HID class descriptor, which HID interfaces place between the interface descriptor and its endpoint descriptors:
09 21 01 00 00 01 22 34 00
Then we have a 7-byte descriptor of type 5, so that is an endpoint descriptor:
07 05 81 03 04 00 07
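To make that walk explicit, here is a small sketch that steps through the mouse's bytes from your dump using bLength and bDescriptorType (just an illustration in Python; nothing device-specific is assumed):
# Walk a GET_DESCRIPTOR(CONFIGURATION) response by bLength / bDescriptorType.
# Input is the mouse's data from the hexdump above (wTotalLength = 0x0022 bytes).
DESCRIPTOR_TYPES = {0x02: "CONFIGURATION", 0x04: "INTERFACE", 0x21: "HID", 0x05: "ENDPOINT"}
buf = bytes.fromhex(
    "09022200010106a032"    # configuration descriptor
    "090400000103010200"    # interface descriptor, bNumEndpoints = 1
    "092101000001223400"    # HID class descriptor, sits before the endpoints
    "07058103040007"        # endpoint descriptor, 0x81 = EP1 IN, interrupt
)
offset = 0
while offset < len(buf):
    length, dtype = buf[offset], buf[offset + 1]
    name = DESCRIPTOR_TYPES.get(dtype, "unknown")
    print(f"offset {offset:2}: {name:13} {buf[offset:offset + length].hex()}")
    offset += length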

openssl s_client only works with -tls1 switch to connect

We have some legacy systems that still only support TLS 1.0 (there are plans to move off this soon, but not soon enough).
In order to connect to our new system, I have enabled TLS 1.0 connections. However, when I run a command like:
openssl s_client -connect host:port
I get a failure to connect. When I add the -debug switch to see why, I see the following:
CONNECTED(00000004)
write to 0x8000d02160 [0x8000d64000] (139 bytes => 139 (0x8B))
0000 - 80 89 01 03 01 00 60 00-00 00 20 00 00 39 00 00 ......`... ..9..
0010 - 38 00 00 35 00 00 88 00-00 87 00 00 84 00 00 16 8..5............
0020 - 00 00 13 00 00 0a 07 00-c0 00 00 33 00 00 32 00 ...........3..2.
0030 - 00 2f 00 00 45 00 00 44-00 00 41 03 00 80 00 00 ./..E..D..A.....
0040 - 05 00 00 04 01 00 80 00-00 15 00 00 12 00 00 09 ................
0050 - 06 00 40 00 00 14 00 00-11 00 00 08 00 00 06 04 ..#.............
0060 - 00 80 00 00 03 02 00 80-00 00 ff 29 c2 dd fb 71 ...........)...q
0070 - 5b 62 90 9e 5b b7 e7 5f-2e 67 9f a2 d2 01 eb bd [b..[.._.g......
0080 - 7f 16 28 2a 66 eb 37 78-92 d7 80 ..(*f.7x...
read from 0x8000d02160 [0x8000d6a000] (7 bytes => 0 (0x0))
59659:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:/home/src/secure/lib/libssl/../../../crypto/openssl/ssl/s23_lib.c:182:
But when I add the -tls1 switch, I get connected as expected. I am confused about why this is happening. Shouldn't openssl try all acceptable methods when connecting?
0000 - 80 89 01 03 01 ...
This is an SSLv2-compatible ClientHello (0x01) announcing support for TLS version 1.0 (0x0301). My guess is that the server does not understand an SSLv2-compatible handshake (long obsolete) but expects a proper TLS handshake, which you can get with the -tls1 option.
The fact that your openssl s_client does this SSLv2-compatible handshake by default, and that it announces at most TLS 1.0, suggests that you are using an old and unsupported version of OpenSSL, i.e. 0.9.8 or 1.0.0.
Shouldn't openssl try all acceptable methods when connecting ?
That's not how SSL/TLS works. There is no trying of various methods. Instead, the client announces the best version it can do (TLS 1.0 in your case) and the server picks a protocol version equal to or lower than the one announced by the client, in the hope that the client will accept it.
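If you want a second opinion outside of the openssl command line, something like the following sketch (host and port are placeholders) forces a plain TLS 1.0 handshake, which is essentially what -tls1 does:
# Force a plain TLS 1.0 handshake (no SSLv2-compatible hello) and print the
# negotiated protocol version. Assumes Python's ssl module still exposes
# PROTOCOL_TLSv1 (deprecated but available).
import socket, ssl
HOST, PORT = "host", 443            # placeholders for the legacy server
ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE     # we only care whether the handshake completes
with socket.create_connection((HOST, PORT)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        print(tls.version())        # expect "TLSv1"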

How can I check INITIALIZE UPDATE and EXTERNAL AUTHENTICATE correctness?

I sent 80 50 00 00 08 00 00 00 00 00 00 00 00 [INITIALIZE UPDATE command] via opensc-tool to my Java Card and received 00 00 11 60 01 00 8A 79 0A F9 FF 02 00 11 79 11 36 5D 71 00 A5 A5 EC 63 BB DC 05 CC [Init Response] as the response from the card.
As you see:
In the command, I send 00 00 00 00 00 00 00 00 as the host challenge, and in the response:
00 00 11 60 01 00 8A 79 0A F9 = Key diversification data
FF 02 = Key information
00 11 79 11 36 5D 71 00 = Card challenge
A5 A5 EC 63 BB DC 05 CC = Card cryptogram
Now I want to check for myself whether the card cryptogram is OK or not. How can I do it? For example, I encrypt 00 00 00 00 00 00 00 00 on this site with the 3DES algorithm [with my card's keys = 4041...4F], but the output is not equal to the card cryptogram that I wrote above. Why?
And the next question is: if I want to send an EXTERNAL AUTHENTICATE command to the card, what should its data field be (after the above INITIALIZE UPDATE)?
Update:
This is GPJ output :
C:\Users\ghasemi\Desktop\gpj-20120310>GPJ
C:\Users\ghasemi\Desktop\gpj-20120310>java -jar gpj.jar
Found terminals: [PC/SC terminal ACS CCID USB Reader 0]
Found card in terminal: ACS CCID USB Reader 0
ATR: 3B 68 00 00 00 73 C8 40 12 00 90 00
.
.
.
DEBUG: Command APDU: 00 A4 04 00 08 A0 00 00 00 03 00 00 00
DEBUG: Response APDU: 6F 10 84 08 A0 00 00 00 03 00 00 00 A5 04 9F 65 01 FF 90 00
Successfully selected Security Domain OP201a A0 00 00 00 03 00 00 00
DEBUG: Command APDU: 80 50 00 00 08 7F 41 A9 E7 19 37 83 FA
DEBUG: Response APDU: 00 00 11 60 01 00 8A 79 0A F9 FF 02 00 1B 9B 95 B9 5E 5E BC BA 51 34 84 D9 C1 B9 6E 90 00
DEBUG: Command APDU: 84 82 00 00 10 13 3B 4E C5 2C 9E D8 24 50 71 83 3A 78 AE 75 23
DEBUG: Response APDU: 90 00
DEBUG: Command APDU: 84 82 00 00 08 13 3B 4E C5 2C 9E D8 24
DEBUG: Response APDU: 90 00
C:\Users\ghasemi\Desktop\gpj-20120310>
So :
Host_Challenge :: 7F41A9E7193783FA
Diversification_Data :: 0000116001008A790AF9
Key_Information :: FF02
Sequence_Counter :: 001B
Card_Challenge :: 9B95B95E5EBC
Card_Cryptogram :: BA513484D9C1B96E
Host_Cryptogram[16,24] = 13 3B 4E C5 2C 9E D8 24
Now, let's compute our Host_Cryptogram manually:
Derivation_data=derivation_const_ENC|sequence_counter|0000 0000 0000 0000 0000 0000
Derivation_Data = 0182001B000000000000000000000000
k_ENC :: 404142434445464748494A4B4C4D4E4F
IV = 00 00 00 00 00 00 00 00
S_ENC = encrypt(TDES_CBC, K_ENC, IV, derivation_data)
So :
I used http://tripledes.online-domain-tools.com/ and its output for the above values was:
S_ENC = 448b0a5967ca246d058703ff0c694f15
And :
Padding_DES = 80 00 00 00 00 00 00 00
Host_auth_data = sequence_counter | card_challenge | host_challenge | padding_DES
IV = Card_Cryptogram :: BA513484D9C1B96E
host_cryptogram = encrypt(TDES_CBC, S_ENC, IV, host_auth_data)
So :
Host_Authentication_Data : 001B9B95B95E5EBC7F41A9E7193783FA8000000000000000
Again, I used http://tripledes.online-domain-tools.com/
and :
Host_Cryptogram : 3587b531db71ac52392493c08cff189ce7b9061029c63b62
So :
Host_Cryptogram[16,24] = e7b9061029c63b62
Why do these two ways [manual calculation and GPJ output] give us two different host cryptograms?
From the INITIALIZE UPDATE command you send, you get
host_challenge = 00 00 00 00 00 00 00 00
In response to the INITIALIZE UPDATE command, you get
diversification_data = 00 00 11 60 01 00 8A 79 0A F9
key_information = FF 02
sequence_counter = 00 11
card_challenge = 79 11 36 5D 71 00
card_cryptogram = A5 A5 EC 63 BB DC 05 CC
The key information indicates SCP02 (02). The key diversification data may be used to derive the card-specific K_ENC. Let's assume we have a K_ENC like this:
K_ENC = 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
We can then derive the session encryption key like this
derivation_const_ENC = 01 82
derivation_data = derivation_const_ENC | sequence_counter | 00 00 00 00 00 00 00 00 00 00 00 00
IV = 00 00 00 00 00 00 00 00
S_ENC = encrypt(TDES_CBC, K_ENC, IV, derivation_data)
Next, we can assemble the authentication data used to calculate the host cryptogram:
padding_DES = 80 00 00 00 00 00 00 00
host_auth_data = sequence_counter | card_challenge | host_challenge | padding_DES
Then we can use the session encryption key to encrypt the authentication data:
IV = 00 00 00 00 00 00 00 00
host_cryptogram = encrypt(TDES_CBC, S_ENC, IV, host_auth_data)
The last 8 bytes of the encrypted authentication data are the actual host cryptogram that we would send to the card:
EXTERNAL_AUTHENTICATE_data = host_cryptogram[16, 24]
Now we can assemble the EXTERNAL AUTHENTICATE command:
EXTERNAL_AUTHENTICATE = 84 82 03 00 08 | EXTERNAL_AUTHENTICATE_data
We can then calculate the S_MAC key (analogous to deriving S_ENC above) and the MAC over that command, and append it to the command data to get the full EXTERNAL AUTHENTICATE command that can be sent to the card:
EXTERNAL_AUTHENTICATE = 84 82 03 00 10 | EXTERNAL_AUTHENTICATE_data | MAC
Update
Using http://tripledes.online-domain-tools.com/ to reproduce the results of GPJ
Your K_ENC is 404142434445464748494A4B4C4D4E4F. The online tool does not properly support 2-key 3DES, so you have to convert the key into its 3-key form first:
K_ENC = 404142434445464748494A4B4C4D4E4F4041424344454647
Use this key and a zero IV to encrypt the derivation data (0182001B000000000000000000000000). You get
S_ENC = fb063cc2e17b979b10e22f82110234b4
In 3-key notation, this is
S_ENC = fb063cc2e17b979b10e22f82110234b4fb063cc2e17b979b
Use this key and a zero IV to encrypt the host authentication data (001b9b95b95e5ebc7f41a9e7193783fa8000000000000000):
HOST_CRYPTOGRAM = 773e790c91acce3167d99f92c60e2afd133b4ec52c9ed824
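If you prefer to check this in code instead of the online tool, the calculation above can be reproduced in a few lines; this is just a sketch assuming the pycryptodome library (any 3DES-CBC implementation will do), using the values from your GPJ trace:
# SCP02 host cryptogram, as described above, with the values from the GPJ trace.
# pycryptodome treats a 16-byte key as two-key 3DES (K1 | K2 | K1).
from Crypto.Cipher import DES3
def tdes_cbc(key, data):
    # 3DES-CBC with a zero IV and no padding (inputs are already block-aligned)
    return DES3.new(key, DES3.MODE_CBC, iv=bytes(8)).encrypt(data)
k_enc = bytes.fromhex("404142434445464748494A4B4C4D4E4F")
sequence_counter = bytes.fromhex("001B")
card_challenge = bytes.fromhex("9B95B95E5EBC")
host_challenge = bytes.fromhex("7F41A9E7193783FA")
derivation_data = bytes.fromhex("0182") + sequence_counter + bytes(12)
s_enc = tdes_cbc(k_enc, derivation_data)
print(s_enc.hex())            # expected fb063cc2e17b979b10e22f82110234b4
host_auth_data = sequence_counter + card_challenge + host_challenge + bytes.fromhex("8000000000000000")
host_cryptogram = tdes_cbc(s_enc, host_auth_data)[-8:]
print(host_cryptogram.hex())  # expected 133b4ec52c9ed824, the GPJ EXTERNAL AUTHENTICATE data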

understand hexedit of an elf

Consider the following hexedit display of an ELF file.
00000000 7F 45 4C 46 01 01 01 00 00 00 00 00 .ELF........
0000000C 00 00 00 00 02 00 03 00 01 00 00 00 ............
00000018 30 83 04 08 34 00 00 00 50 14 00 00 0...4...P...
00000024 00 00 00 00 34 00 20 00 08 00 28 00 ....4. ...(.
00000030 24 00 21 00 06 00 00 00 34 00 00 00 $.!.....4...
0000003C 34 80 04 08 34 80 04 08 00 01 00 00 4...4.......
00000048 00 01 00 00 05 00 00 00 04 00 00 00 ............
How many section headers does it have?
Is it an object file or an executable file?
How many program headers does it have?
If there are any program headers, what does the first program header do?
If there are any section headers, at what offset is the section header table?
Strange, this hexdump looks like your homework to me...
There are 36 section headers.
It is an executable.
It has 8 program headers.
As you can tell from the first word of the first program header (the little-endian 32-bit value at offset 0x34, which is 0x00000006), it is of type PT_PHDR, which just describes the characteristics of the program header table itself.
The section header table begins at byte 5200 (which is 0x1450 in hex).
How do I know this stuff? By dumping the hex into a binary and reading it with readelf -a (because I am lazy). Except for question no. 4, which I had to figure out manually by reading man 5 elf.
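If you want to see where those numbers come from without readelf, you can unpack the 32-bit little-endian header fields directly from the bytes shown above; this is just a sketch:
# Decode the ELF32 header fields from the hexdump above (little-endian).
import struct
header = bytes.fromhex(
    "7F454C4601010100" "0000000000000000"  # e_ident
    "0200030001000000"                     # e_type, e_machine, e_version
    "3083040834000000" "5014000000000000"  # e_entry, e_phoff, e_shoff, e_flags
    "3400200008002800" "24002100"          # e_ehsize .. e_shstrndx
)
(e_type, e_machine, e_version, e_entry, e_phoff, e_shoff, e_flags,
 e_ehsize, e_phentsize, e_phnum, e_shentsize, e_shnum, e_shstrndx) = \
    struct.unpack_from("<HHIIIIIHHHHHH", header, 16)
print("e_type :", e_type)     # 2 = ET_EXEC, so it is an executable
print("e_phnum:", e_phnum)    # 8 program headers
print("e_shnum:", e_shnum)    # 36 section headers
print("e_shoff:", e_shoff)    # 5200 = 0x1450, offset of the section header table
# The first program header starts at e_phoff = 0x34; its first 32-bit word in
# the dump is 06 00 00 00, i.e. p_type = 6 = PT_PHDR.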

Writing to USB HID device

I have a problem writing to a HID device.
Below are two logs made with Snoopy.
The first one was made using the device manufacturer's original demo software, and the second one comes from my software.
My software doesn't work with this device, but it works with another HID device.
Original software:
9 ??? down n/a 27.868 BULK_OR_INTERRUPT_TRANSFER 06 16 19 17 00 00 00 00
URB Header (length: 72)
SequenceNumber: 9
Function: 0009 (BULK_OR_INTERRUPT_TRANSFER)
TransferFlags: 0x00000002
TransferBuffer: 0x00000040 (64) length
0000: 06 16 19 17 00 00 00 00 00 00 00 00 00 00 00 00
0010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
9 ??? up n/a 27.874 BULK_OR_INTERRUPT_TRANSFER - 0x00000000
URB Header (length: 72)
SequenceNumber: 9
Function: 0009 (BULK_OR_INTERRUPT_TRANSFER)
TransferFlags: 0x00000002
No TransferBuffer
My software:
9 out down n/a 22.224 CLASS_INTERFACE 06 16 19 17 00 00 00 00
URB Header (length: 80)
SequenceNumber: 9
Function: 001b (CLASS_INTERFACE)
PipeHandle: 00000000
SetupPacket:
0000: 22 09 00 02 00 00 00 00
bmRequestType: 22
DIR: Host-To-Device
TYPE: Class
RECIPIENT: Endpoint
bRequest: 09
TransferBuffer: 0x00000040 (64) length
0000: 06 16 19 17 00 00 00 00 00 00 00 00 00 00 00 00
0010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
9 out up n/a 22.227 CONTROL_TRANSFER - 0x00000000
URB Header (length: 80)
SequenceNumber: 9
Function: 0008 (CONTROL_TRANSFER)
PipeHandle: 877af60c
SetupPacket:
0000: 21 09 00 02 00 00 40 00
bmRequestType: 21
DIR: Host-To-Device
TYPE: Class
RECIPIENT: Interface
bRequest: 09
No TransferBuffer
Code used to send the data looks like this:
hiddata.ReportID := 0;
hiddata.Data[0] := 6;
hiddata.Data[1] := $16;
hiddata.Data[2] := $19;
hiddata.Data[3] := $17;
for I := 4 to 64 do
  hiddata.Data[I] := $0;
b := HidD_SetOutputReport(HidHandle, @hiddata, 65);  // 65 = report ID byte + 64 data bytes
HidHandle is correct and variable "b" is True after execution.
Any ideas?
What am I doing wrong?
Original:
Function: 0009 (BULK_OR_INTERRUPT_TRANSFER)
Your program:
Function: 0008 (CONTROL_TRANSFER)
The HID spec allows both, IIRC, but it seems your hardware is picky and only works when the interrupt OUT endpoint is used. In Win32 terms that means sending the report with WriteFile on the HID handle (which goes over the interrupt OUT pipe when the device has one) instead of HidD_SetOutputReport (which always issues a SET_REPORT control transfer).
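For what it's worth, you can reproduce the "interrupt pipe" behaviour with the hidapi library as a quick test; this is only an illustration (VID/PID are placeholders), since on Windows hidapi sends output reports with WriteFile rather than HidD_SetOutputReport:
# Illustrative only: send the same 64-byte output report via the cython-hidapi
# binding. On Windows its write() uses WriteFile, i.e. the interrupt OUT
# endpoint when the device has one.
import hid
VID, PID = 0x1234, 0x5678              # placeholder vendor/product IDs
dev = hid.device()
dev.open(VID, PID)
report = [0x00, 0x06, 0x16, 0x19, 0x17] + [0x00] * 60   # report ID 0 + 64 data bytes
print(dev.write(report))               # number of bytes written (expect 65)
dev.close()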