The supplied credentials '{0'} cannot be used to sign request - azure-storage

While using a Shared Access Signature to fetch table data with the StorageClient Library 2.0, I consistently get the error "The supplied credentials '{0'} cannot be used to sign request". From the code on GitHub, what I understand is that the error occurs because sasCredentials.CanSignRequest returns false... but as per the code on GitHub, there are no scenarios where it is supposed to return true... is it a bug, or am I doing something wrong here?
StorageCredentials sasCredentials = new StorageCredentialsSharedAccessSignature(sharedAccessSignature);
CloudTableClient ctc = new CloudTableClient(tableEndpoint, sasCredentials);

The StorageCredentialsSharedAccessSignature type does not exist in Azure Storage Client Library 2.0, so I am assuming you are still using an older version, most probably 1.7. As described in the Introducing Table SAS (Shared Access Signature), Queue SAS and update to Blob SAS blog post, support for Table SAS was added in newer releases of the Azure Storage Client Library.
I strongly recommend upgrading to 2.0, which has many other improvements in addition to the functionality you are looking for. For more details, please refer to Introducing Windows Azure Storage Client Library 2.0 for .NET and Windows Runtime.
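In 2.0, a SAS token is passed through the StorageCredentials class instead. A minimal sketch, assuming the 2.0 library (Microsoft.WindowsAzure.Storage); the SAS token, endpoint, and table name below are placeholders:

using System;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Table;

// Hypothetical values; substitute your own SAS token and table endpoint.
string sharedAccessSignature = "?sv=2012-02-12&sig=...";
string tableEndpoint = "https://myaccount.table.core.windows.net";

// In 2.0, StorageCredentials accepts a SAS token directly.
StorageCredentials sasCredentials = new StorageCredentials(sharedAccessSignature);
CloudTableClient tableClient = new CloudTableClient(new Uri(tableEndpoint), sasCredentials);
CloudTable table = tableClient.GetTableReference("mytable");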

System.Data.SQLite.Core Cannot use "Password" connection string property

I am writing a .NET Core 3.1 application that depends on another library (Serilog.Sinks.SQLite) which is attempting to store log data to an SQLite database. Unfortunately, Serilog.Sinks.SQLite does not support passing in a password to System.Data.SQLite via the SQLiteConnectionStringBuilder.Password property (which you provide to the SQLiteConnection constructor) and I would really like that functionality.
The code that Serilog.Sinks.SQLite uses to connect to the database is as follows:
private SQLiteConnection GetSqLiteConnection()
{
    var sqlConString = new SQLiteConnectionStringBuilder
    {
        DataSource = _databasePath,
        JournalMode = SQLiteJournalModeEnum.Memory,
        SyncMode = SynchronizationModes.Normal,
        CacheSize = 500,
        PageSize = (int)MaxSupportedPageSize,
        MaxPageCount = (int)(_maxDatabaseSize * BytesPerMb / MaxSupportedPageSize)
    }.ConnectionString;

    var sqLiteConnection = new SQLiteConnection(sqlConString);
    sqLiteConnection.Open();
    return sqLiteConnection;
}
There are a number of similar posts on StackOverflow about encryption with SQLite stating very convincingly that encryption / password protection of the database is indeed supported by System.Data.SQLite. However, that does not match my experience.
I grabbed a copy of the Serilog.Sinks.SQLite source in an attempt to prototype a modification to it to support specifying the password. This seems like it should be easy enough to accomplish with the following addition to the above code (specifying the Password property in the connection string):
        MaxPageCount = (int)(_maxDatabaseSize * BytesPerMb / MaxSupportedPageSize),
        Password = "mypasswordhere"
    }.ConnectionString;
Unfortunately, this does not work and results in an exception being thrown on startup with the following message:
Exception has occurred: CLR/System.Data.SQLite.SQLiteException
An unhandled exception of type 'System.Data.SQLite.SQLiteException' occurred in LoggingWebApi.dll:
'SQL logic error Cannot use "Password" connection string property: library was not built with encryption support, please see "https://www.sqlite.org/see" for more information'
What confuses me most here is the number of other posts claiming that System.Data.SQLite supports this password, and the fact that a .NET Framework 4.6 application from another team in my company uses System.Data.SQLite.dll (albeit an older version, 1.0.111.0) and the Password behavior works fine for them.
My code targets netcoreapp3.1 and my dependency, Serilog.Sinks.SQLite, targets netstandard2.0, so that is one obvious difference I see. In my modified version of the Serilog.Sinks.SQLite code, I am referencing System.Data.SQLite.Core as follows (added via dotnet add package System.Data.SQLite.Core --version 1.0.113.1):
<PackageReference Include="System.Data.SQLite.Core" Version="1.0.113.1" />
Is there something with .NET Core 3.1 or .NET Standard 2.0 that causes System.Data.SQLite.Core to not support the Password connection string property like everyone else seems to think it supports?
I thought maybe my issue was running on Linux, so I tried running on Windows, but it produced the same error.
I did find another post referencing a potentially helpful approach, but its sample uses SQLitePCLRaw.bundle_e_sqlcipher rather than System.Data.SQLite.Core, and I would prefer to avoid rewriting all of the Serilog.Sinks.SQLite code to use a different SQLite client: https://github.com/paragpkulkarni/SQLiteEncryptionUsingEFCore
It seems that System.Data.SQLite has dropped support for encryption as of version 1.0.113.1 which explains why I haven't been able to get it working:
https://system.data.sqlite.org/index.html/tktview?name=9c330a3e03
This is also mentioned on their News page though it was not clear to me that the Password support was part of the "legacy CryptoAPI Codec":
https://system.data.sqlite.org/index.html/doc/trunk/www/news.wiki
The other team I referred to is using an older version (1.0.111.0), so if you require this support, I guess just don't upgrade...
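If you do pin to the older package, the reference would look something like this (1.0.111.0 being the version the other team verified the Password support with; staying on it means forgoing later fixes):
<PackageReference Include="System.Data.SQLite.Core" Version="1.0.111.0" />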

How to access SAP OData messages in Kapsel offline app?

We are developing an SAP Fiori app to be used on the Launchpad and, additionally, as an offline-enabled hybrid app using the SAP SDK and its Kapsel plug-ins. One issue we are facing at the moment is OData message handling.
On the Gateway, we are using the Message Manager to add additional information to the response:
" ABAP snippet, random Gateway entity method
[...]
DATA(lo_message_container) = me->mo_context->get_message_container( ).
lo_message_container->add_message(
iv_msg_type = /iwbep/cl_cos_logger=>warning
iv_msg_number = '123'
iv_msg_id = 'ZFOO'
).
" optional, only used for 'true' errors
RAISE EXCEPTION TYPE /iwbep/cx_mgw_busi_exception
EXPORTING
message_container = lo_message_container.
In the Fiori app, we can access this data directly from the message manager and apply it to a MessageView control.
// Fiori part (Desktop, online)
var aMessageData = sap.ui.getCore().getMessageManager().getMessageModel().getData();
However, our offline app always has an empty message model. After a sync or flush, the message model stays empty, even after triggering message-generating methods in the backend.
The only way to get any messages at all is to raise a /iwbep/cx_mgw_busi_exception and pass the message container. The messages can then be found, in an unparsed state, in the /ErrorArchive entity and read for further use.
// Hybrid App part, offline, after sync and flush
this.getModel().read("/ErrorArchive", { success: .... })
This approach limits us to negative, "exception worthy" messages only. We also have to code some parts of our app twice (desktop vs. offline app).
So: is there a "proper" way to access those messages after an offline sync and flush?
For analyzing the issue, you might use the tool ILOData as seen in this blog:
Step by Step with the SAP Cloud Platform SDK for Android — Part 6c — Using ILOData
Note that ILOData is part of the Kapsel SDK, so while the blog above is part of a series on the SAP Cloud Platform SDK for Android, it also applies to Kapsel apps.
ILOData is a command line based tool that lets you execute OData requests and queries against an offline store.
It functions as an offline OData client, without the need for an application.
Therefore, it’s a good tool to use to test data from the backend system, as well as verify app behavior.
If a client has a problem with some entries on their device, the offline store from the device can be retrieved using the sendStore method and then ILOData can be used to query the database.
This blog about Kapsel Offline OData plugin might also be helpful.

Old ABAP code still active for PyRFC even after TR was imported. Why?

I changed an ABAP RFC module in SAP system X and transported the changes to system Y. Now when I call the RFC, SAP still executes the old code.
I compared both versions from X and Y with a diff tool and found no differences, so it looks like the changes were transported. Is there a special step needed to activate my ABAP RFC code?
I use PyRFC as a client library.
We had the same problem with one of our RFC FMs. The reason was that the connection remained open once it was established; in that case, the binaries are not refreshed in the RFC context. Simply restart the connection and everything should work as desired.
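A minimal sketch of that workaround (hypothetical connection parameters and RFM name); per the above, reconnecting refreshes the cached metadata:

from pyrfc import Connection

# Hypothetical connection parameters; substitute your system's values.
params = dict(ashost="10.0.0.1", sysnr="00", client="100", user="demo", passwd="secret")

conn = Connection(**params)
result = conn.call("Z_MY_RFM")  # may still see the old signature if metadata was cached

# Closing and reopening the connection forces the SAP NW RFC SDK to
# re-read the function module metadata from the ABAP system.
conn.close()
conn = Connection(**params)
result = conn.call("Z_MY_RFM")  # executes against freshly read metadata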
This is a known issue: https://github.com/SAP/PyRFC/issues/89
Quoting the issue:
After the Python script ends, the connection should be automatically closed and the SAP NW RFC SDK initialised. Here is what happens under the hood.
Python interpreters and PyRFC instances share the same SAP NW RFC SDK lib instance, and when a remote-enabled function module (RFM) is called for the first time, the RFM metadata is cached inside the SAP NW RFC SDK. When the second call of the same RFM is requested from Python/PyRFC, the SAP NW RFC SDK returns the metadata from the cache, rather than reading it again from the ABAP system, saving one Python/ABAP roundtrip and some performance, especially in the case of complex RFMs. If the RFM signature changed in the meantime, the cached RFM metadata is not updated and Python "sees" the old ABAP code.
I hope a developer-friendly solution will be provided in the future.
There is no need to activate anything; it should be fine as soon as you transport it.
You can try these:
Transport everything again (including the other tasks of the same request).
Check the destination field in your call to see if you are calling the correct system.
Clear the buffer (TCode /$sync).

Aerospike: Migrating from Python client to Go client

I have been using Aerospike since 3.4, with Python client 1.0.31.
I have now upgraded to Aerospike 3.6.3 and Python client 1.0.50.
Since the Python client doesn't have an async writes feature, I am planning to switch to Go. I have also read that Go fits well with Aerospike (http://www.aerospike.com/blog/go-aerospike-a-perfect-match/).
I would like to know what consequences I will face when changing the client and how to handle them.
One issue I see is serialization. Since I have been using the Python client since Aerospike 3.4, how do I handle older serialized data such as float values? I need not worry about new data, as recent releases support floats natively.
Thanks in advance.
Well, "the Python client doesn't have async" needs to come with a big yet. The C client 4.0.0 provides async operations. The current work being done on the Python client is compatibility with Python >= 3.4; async is something that is planned.
The main thing to consider when moving from one language client to another, or when combining different SDKs, is how to handle 'unsupported' types. You'll have to review your data for places where it contains serialized data in as_bytes, encoded as AS_BYTES_PYTHON. See the 'Serialization' section in the Python API docs. You will want to come up with a common custom serialization scheme so that your Go client can read that data.
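One hedged option, sketched from the Python side: write such values in a language-neutral encoding (JSON here) instead of relying on the client's native pickling, so the Go client can decode the same bins. The namespace, set, and bin names are hypothetical:

import json
import aerospike

config = {"hosts": [("127.0.0.1", 3000)]}  # hypothetical cluster address
client = aerospike.client(config).connect()

key = ("test", "demo", "user1")

# Store a JSON string instead of a Python-pickled float so that any
# language client (Go included) can deserialize the value.
client.put(key, {"payload": json.dumps({"score": 1.5})})

(_, _, bins) = client.get(key)
value = json.loads(bins["payload"])  # {'score': 1.5}

client.close()

Existing records encoded as AS_BYTES_PYTHON would still need a one-off migration pass from the Python side before the Go client can read them.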

How do multiple versions of a REST API share the same data model?

There is a ton of documentation on academic theory and best practices for managing the versioning of RESTful web services; however, I have not seen much discussion of how multiple versions of a REST API interact with the same data.
I'd like to see various architectural strategies or documentation on how to handle hosting multiple versions of your app that rely on the same data pool.
For instance, suppose you make a destructive, database-level change to a table that forces you to increment your major API version to v2.
Now, at any given time, users could be interacting with the v1 and v2 web services simultaneously, creating data that is visible and editable by both services. How should this be handled?
Most changes introduced to an API affect the content of the response; as long as the changes are incremental, this is not a very big problem (note: you should never expose the exact DB model directly to clients).
When you make a destructive/significant change to the DB model and a new version of the API is introduced, there are two options:
Turn the previous version off and answer all requests to it with a 301 and the new location (sketched below).
If option 1 is impossible, you need to maintain both the previous and current versions of the API. Since this can be time- and money-consuming, it should be done only for a limited period, after which the previous version should be turned off.
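A minimal sketch of option 1, using Flask as a hypothetical stack and assuming the v2 paths mirror the v1 paths:

from flask import Flask, redirect

app = Flask(__name__)

# Every retired v1 endpoint answers with a 301 pointing at its v2 equivalent.
@app.route("/v1/<path:rest>", methods=["GET", "POST", "PUT", "DELETE"])
def v1_retired(rest):
    return redirect(f"/v2/{rest}", code=301)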
What about the DB model? When two versions of the API are active at the same time, I'd try to keep the DB model as consistent as possible, keeping in mind that running two versions at once is only temporary. But as I wrote earlier, the DB model should never be exposed directly to clients; this may help you avoid a lot of problems.
I have given this a little thought...
One solution may be this:
Just because the v1 API should not change, it doesn't mean the underlying implementation cannot change. You can modify the v1 implementation to set a default value, omit saving a field, return an unchecked exception, or apply some computational logic that keeps the v1 API compatible with the shared datasource. Then implement a better, cleaner, more idealistic implementation in v2.
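For example, a minimal sketch (hypothetical field names) of v1 handlers adapting to a shared, v2-shaped store in which 'name' was renamed to 'full_name' and a 'status' column was added:

# Shared, v2-shaped in-memory store standing in for the database.
users = {1: {"id": 1, "full_name": "Ada Lovelace", "status": "active"}}

def get_user_v1(user_id):
    row = users[user_id]
    # Map the renamed column back to the field the v1 contract promises.
    return {"id": row["id"], "name": row["full_name"]}

def create_user_v1(user_id, payload):
    users[user_id] = {
        "id": user_id,
        "full_name": payload["name"],  # translate v1 input to the v2 column
        "status": "active",            # default so v2 readers see valid data
    }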
Whenever you are going to change anything in your API structure that can change the response, you must increase your API version.
For example, you have this request and response:
request post: a, b, c, d
res: {a, b, c+d}
Now suppose you are going to add 'e', fetched from the database, to your response.
If no current client version depends on 'e', you can add it within your current API version.
But if your new changes alter previous responses, for example:
res: {a+e, b, c+d}
you must increase the API version number to prevent crashing clients.
Changes to the request inputs work the same way.