Objective-C Asynchronous to Synchronous Database Access [closed] - sql

I have found an open source Objective-C library for connecting to Microsoft SQL Server databases.
The problem is, that I would like to use it synchronously.
This is how my Swift project uses the library.
func execute(query: String) {
    self.client.connect(host + ":" + port, username: username, password: password, database: database) { (connected) -> Void in
        if connected {
            self.client.execute(query, completion: { (results: Array<AnyObject>!) -> Void in
                self.result = results[0] as! Array<AnyObject>
            })
        }
    }
}
The block passed in is executed asynchronously by the library. Is there a way to make the code run synchronously, so that whenever I call execute, the calling thread waits for the library's work to complete before execute returns?

So, I have some experience with the GitHub library you're using.
It seems most likely that you want to make this call synchronously because of some of the problems this library has (you can't make multiple queries at once, and you can't open multiple connections to the server, because the library does all of its SQL work through a singleton). To get around some of these problems, I highly recommend you check out the SQLConnect library that I wrote (after spending some time trying to use Martin's library that you're using). My library steps away from the singleton approach, and you can make as many connections on as many different threads as you want.
With that said... you can make this library (and mine as well) perform synchronously.
If you notice in the SQLClient.h file, the SQLClient object specifies a workerQueue and a callbackQueue. Martin has set the callbackQueue to be on whatever queue the singleton is first instantiated on, and the workerQueue is a queue he specifies. However, these are public properties which can be set perfectly fine.
If you really want the query to perform synchronously, just set the workerQueue and callbackQueue to operate on the current queue.
SQLClient.sharedInstance.workerQueue = NSOperationQueue.currentQueue()
SQLClient.sharedInstance.callbackQueue = NSOperationQueue.currentQueue()
And then perform your query.
All of the code will execute on the same NSOperationQueue and, as such, will be synchronous.
Of course, you can do the same thing using my SQLConnect library, as the SQLConnection object similarly specifies a workerQueue and callbackQueue which you can specify to any queue you want.
With all of this said, I highly, highly, highly recommend that you allow the operation to remain asynchronous and come up with some other solution to whatever problem makes you think it should be performing synchronously. And even if you still think it should be synchronous, please be sure it's not blocking the UI thread.

Related

How do you handle database errors in Go without getting coupled to the SQL driver?

A common way to interact with a SQL database in Go is to use the built-in database/sql interface. Many different third-party packages implement this interface in a way that is specific to a particular database without exposing that work to you as a consumer, e.g. the Postgres driver, the MySQL driver, etc.
However, database/sql doesn't provide any specific error types, leaving it up to the driver instead. This presents a problem: any error handling you do for these specific errors beyond nil checks now works off of the assumption of a particular driver. If you decide to change drivers later, all of the error handling code must be modified. If you want to support multiple drivers, you need to write additional checks for that driver too.
This seemingly undermines the primary benefit of using interfaces: portability with an agreed-upon contract.
Here's an example to illustrate this problem using the jackc/pgx/v4/stdlib driver and suite of helper packages:
import (
    "database/sql"
    "errors"

    "github.com/jackc/pgconn"
    "github.com/jackc/pgerrcode"
)
// Omitted code for the sake of simplification, err comes from database/sql
if err != nil {
    var pgerr *pgconn.PgError
    if errors.As(err, &pgerr) {
        if pgerrcode.IsIntegrityConstraintViolation(pgerr.SQLState()) {
            return nil, errors.New("related entity does not exist")
        }
    }
    // If we wanted to support another database driver, we'd have to include that here
    return nil, errors.New("failed to insert the thing")
}
If I already have to put driver-specific code into my package, why bother accepting the database/sql interface at all? I could instead require the specific driver, which is arguably safer since it prevents the consumer from trying to use some other unsupported driver that we don't have error handling for.
Is there a better way to handle specific database/sql errors?
You don't need driver-specific code to get the SQLState. Example:
func getSQLState(err error) {
    type checker interface {
        SQLState() string
    }
    // Use a checked type assertion so errors that don't carry an SQLState
    // don't cause a panic.
    if pe, ok := err.(checker); ok {
        log.Println("SQLState:", pe.SQLState())
    }
}
But the SQLState is database-specific anyway. If you switch to another database/driver in the future, then you need to change all of the error codes manually; the compiler would not help you detect that.
Package sql provides a generic interface around SQL (or SQL-like) databases.
There is a compromise between providing the minimal common set of features, and providing features that would not be available for all implementations. The sql package has prioritized the former, while maybe you prefer more of the latter.
You could argue that every possible implementation should be able to provide a specific error for your example. Maybe that's the case. Maybe not. I don't know.
Either way, it is possible for you to wrap pgerrcode.IsIntegrityConstraintViolation inside a function that does this check for every driver that you support. Then it is up to you to decide how to deal with drivers that lack support.

Safe to use (non-thread-safe) mutableMap in suspend function? [closed]

I'm learning Coroutines in Kotlin and I have a piece of code that looks like this (see below).
My friend says that mutableMapOf() returns a LinkedHashMap, which is not thread-safe. The argument is that the suspend function may be run by different threads, and thus LinkedHashMap is unsuitable.
1. Is it safe to use a simple mutable map here, or is a ConcurrentMap needed?
2. When a suspend function is suspended, can it be resumed and executed by another thread?
3. Even if (2) is possible, is there a "happens-before / happens-after" guarantee that ensures all the variables (and the underlying object contents) are deeply synchronized from main memory before the new thread takes over?
Here's a simplified version of the code:
class CoroutineTest {
    private val scope = CoroutineScope(SupervisorJob() + Dispatchers.Default)

    suspend fun simpleFunction(): MutableMap<Int, String> {
        val myCallResults = mutableMapOf<Int, String>()

        val deferredCallResult1 = scope.async {
            // make rest call get string back
        }
        val deferredCallResult2 = scope.async {
            // make rest call get string back
        }
        ...

        myCallResults.put(1, deferredCallResult1.await())
        myCallResults.put(2, deferredCallResult2.await())
        ...

        return myCallResults
    }
}
Thanks in advance!
PS. I ran this code with many more async call results and had no problem; all call results were accounted for. But that could be inconclusive, which is why I ask.
Since the map is local to the suspend function, it is safe to use a non-thread-safe implementation. It is possible that different threads will be working with the map between the different suspension points (in this case the await() calls), but there is a guaranteed happens-before/happens-after relationship within the suspend function.
If your map were declared outside the suspend function and accessed via a property, then there could be simultaneous calls to this function and you would be modifying it concurrently, which would be a problem.
No, it is not safe to use a single mutableMapOf() from multiple coroutines.
You are misunderstanding suspension. It is not the function that is suspended; it is the coroutine running in the function that can suspend. From this perspective, suspending functions aren't really different from normal functions: they can be executed by many coroutines at the same time, and all of them will work concurrently.
But... there is nothing wrong with your code, for another reason. This mutable map is a local variable, so it is only available to the coroutine/thread that created it. Therefore, it is not accessed concurrently at all. It would be different if the map were a property of CoroutineTest; then it might mean you need to use a ConcurrentMap.
Updated
After reading all the comments I believe I have a better understanding of your (or your friend's) concerns, so I can provide a more accurate answer.
Yes, after suspending, a coroutine can resume on another thread, so coroutines make it possible for one part of a function to be executed by one thread and another part by another thread. In your example it is possible that put(1, ...) and put(2, ...) will be invoked from two different threads.
However, saying that LinkedHashMap is not thread-safe doesn't mean that it always has to be accessed by the same thread. It can be accessed by multiple threads, just not at the same time. One thread needs to finish performing its changes to the map, and only then can another thread perform its modifications.
Now, in your code the async { } blocks can run in parallel to each other. They can also run in parallel to the outer scope. But the contents of each of them run sequentially. The put(2, ...) line can only be executed after put(1, ...) fully finishes, so the map is not accessed by multiple threads at the same time.
As stated earlier, it would be different if the map were stored e.g. as a property, simpleFunction() modified it, and this function were invoked multiple times in parallel; then each invocation would try to modify it at the same time. It would also be different if the async operations modified myCallResults directly. As I said, the async blocks run in parallel to each other, so they could modify the map at the same time. But since you only return a result from the async blocks and then modify the map from a single coroutine (the outer scope), the map is accessed sequentially, not concurrently.
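To make the distinction concrete, here is a minimal Kotlin sketch of the unsafe variant described above (a shared property map written by parallel coroutines) next to one possible fix using a ConcurrentHashMap. The class name, function names and the fake REST call are illustrative, not taken from the original code:
import java.util.concurrent.ConcurrentHashMap
import kotlinx.coroutines.*

class SharedMapExample {
    private val scope = CoroutineScope(SupervisorJob() + Dispatchers.Default)

    // Unsafe: a plain LinkedHashMap shared by coroutines running in parallel.
    private val unsafeResults = mutableMapOf<Int, String>()

    // One possible fix: a thread-safe map implementation.
    private val safeResults = ConcurrentHashMap<Int, String>()

    suspend fun fetchAllUnsafe() {
        // The async blocks run in parallel and all write to the same map:
        // concurrent modification of a non-thread-safe map.
        (1..10).map { id ->
            scope.async { unsafeResults[id] = fakeRestCall(id) }
        }.awaitAll()
    }

    suspend fun fetchAllSafe() {
        // Same structure, but ConcurrentHashMap tolerates parallel writers.
        (1..10).map { id ->
            scope.async { safeResults[id] = fakeRestCall(id) }
        }.awaitAll()
    }

    private suspend fun fakeRestCall(id: Int): String {
        delay(10) // stand-in for a network call
        return "result-$id"
    }
}
The code in the question avoids this entirely because the map is local and only written from the outer coroutine after each await().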

How to handle a dependency on a DLL dynamically?

I have a DLL that hides the differences between different ADO.NET providers and has lots of code like:
private static void AppendProviderSpecificParameterCmdStr(StringBuilder sb, DbCommand cmd, string fieldNameToUse, ComparisonOperator oprtr, string parameterName)
{
    if (cmd is System.Data.OracleClient.OracleCommand || cmd is Oracle.DataAccess.Client.OracleCommand)
    {
        sb.AppendFormat("{0}{1}:{2}", fieldNameToUse, GetComparisonOperatorStr(oprtr, cmd), parameterName);
    }
    else if (cmd is SqlCommand)
    {
        sb.AppendFormat("{0}{1}#{2}", fieldNameToUse, GetComparisonOperatorStr(oprtr, cmd), parameterName);
    }
    else if (cmd is OleDbCommand)
    {
        sb.AppendFormat("{0}{1}?", fieldNameToUse, GetComparisonOperatorStr(oprtr, cmd));
    }
    else
    {
        throw new Exception(string.Format("Wrong database command type: {0},", cmd.GetType()));
    }
}
where ComparisonOperator is my own enum.
Oracle.DataAccess is present on all machines that have the Oracle client, and this code has been fine for my needs. However, now I've run into a situation where there is only SqlClient and no need for Oracle at all. So my code works only if I copy Oracle.DataAccess.dll along, which is naturally a horrible solution. How should this be done the correct way?
Thanks -matti
I wouldn't call a dependency on a DLL a horrible solution. Your solution supports Oracle, and consequently you have an Oracle DLL in your solution; it is what it is.
That said, there are things you could do to abstract away the command type.
One - create complete data access methods that implement an interface. Your current solution I'd classify as more of a helper or utility method for generic data access. You could instead declare an interface specific to a domain, customer for example, like ICustomerDA. In your case you'd have three implementations of ICustomerDA.Insert, with the database specifics buried inside. Your main code would only need to know about ICustomerDA. This is probably what I would do in a larger solution, as differences and features between RDBMSs go well beyond parameter declaration.
Two - if you wanted to stick with more of the helper/utility idea, you could create an interface for a wrapper around the db objects, say IDBCommand. Implementations of IDBCommand would hide the underlying command object and then have provider-specific implementations of an .AppendProviderSpecificParameterCmdStr method, which would allow you to do something like:
OracleDbHelper : IDbCommand...

public void AppendProviderSpecificParameterCmdStr(...) {
    sb.AppendFormat("{0}{1}:{2}", fieldNameToUse, GetComparisonOperatorStr(oprtr, cmd), parameterName);
}

IDBCommand cmd = DAFactory.GetCommand();
cmd.AppendProviderSpecificParameterCmdStr(...
The key to both of these solutions is referencing by a common interface from your main project rather than individual types. Once you did this, you could use reflection in your factory or better yet, something like MEF to create the actual types.
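As a rough illustration of that "reference a common interface, pick the concrete type in one place" idea, here is a small sketch written in Kotlin rather than C#; the interface, class and factory names are invented for illustration and are not the ADO.NET types:
interface ParameterFormatter {
    fun appendParameter(sb: StringBuilder, field: String, op: String, name: String)
}

class OracleFormatter : ParameterFormatter {
    override fun appendParameter(sb: StringBuilder, field: String, op: String, name: String) {
        sb.append("$field$op:$name")   // Oracle uses :name placeholders
    }
}

class SqlServerFormatter : ParameterFormatter {
    override fun appendParameter(sb: StringBuilder, field: String, op: String, name: String) {
        sb.append("$field$op@$name")   // SQL Server uses @name placeholders
    }
}

class OleDbFormatter : ParameterFormatter {
    override fun appendParameter(sb: StringBuilder, field: String, op: String, name: String) {
        sb.append("$field$op?")        // OLE DB uses positional ? placeholders
    }
}

// Calling code only ever sees the interface; the concrete type is chosen in
// one place (here a trivial factory, which in the .NET version could just as
// well use reflection, MEF or a DI container).
fun formatterFor(provider: String): ParameterFormatter = when (provider.lowercase()) {
    "oracle"    -> OracleFormatter()
    "sqlserver" -> SqlServerFormatter()
    "oledb"     -> OleDbFormatter()
    else        -> throw IllegalArgumentException("Unsupported provider: $provider")
}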
So my code works only if I copy Oracle.DataAccess.dll
Not unless you also have the native OCI DLLs, for example because you have already installed the Oracle Client.
To avoid forcing your users to install the full Oracle Client, you can distribute the DLLs from the Oracle Instant Client together with the application. If the user never chooses to connect to Oracle, these DLLs are never called and just sit there quietly without causing any trouble.
For some hints on what to distribute and how to cover both 32-bit and 64-bit, take a look at this post.
We have a home-grown abstraction layer that currently works with Oracle and MS SQL Server (and is portable to any DBMS with a decent ADO.NET provider), and this system has worked quite well so far.

Design best practice - should an object own what is passed to its constructor? [closed]

I have the following class:
public class SqlCeEventStore : EventStore
{
    private EventStoreDB db;

    public SqlCeEventStore(EventStoreDB db)
    {
        this.db = db;
    }

    public void Dispose()
    {
        db.Dispose();
    }
}
My problem is this: am I correct in disposing the EventStoreDB in the Dispose method of my class, given that it was passed to it in the constructor (and thus, might conceivably be reused after my class is disposed)?
That is, if I dispose it I mandate that the correct usage of my class is:
using (var store = new SqlCeEventStore(new EventStoreDB()))
{
    //...
}
but I can see this alternative call being used:
using (var db = new EventStoreDB())
using (var store = new SqlCeEventStore(db))
{
    //...
}
in which case I should not dispose of the EventStoreDB from the SqlCeEventStore class.
Are there any arguments for one style or the other? I want to pick one and stick to it, and I'd rather not flip a coin :)
In general there is no rule for this, but yes, I would agree that since the object was created outside your scope and was passed to you, you don't own it.
If you had created it, then you would have every right to do whatever you like with it (while documenting the expected behavior for the callers).
This is the classical composition vs aggregation stuff.
If the EventStoreDB is owned by the SqlCeEventStore (i.e. is part of its composition), it should be constructed by, or be merged with, the SqlCeEventStore class.
If it has uses outside the scope of the SqlCeEventStore's lifetime, then it should be created and disposed by the external code.
There is no general rule here, and IMHO, there should not be one either. Different objects have different lifespans, and the most general guideline would be to make sure that objects are managed consistently according to their lifespans, and that lifespans are as short as possible.
You could try to use the following as a guideline (but don't be afraid to deviate when you need to): Dispose of an object in the same scope as you allocate it. This guideline is suitable for many scenarios, and it is exactly what the using statement simplifies.
If you have long-lived objects without an obvious disposal point, don't worry. That's normal. However, ask yourself this: Do I really need this object to live for as long as it does? Is there some other way I can model this to make the lifespan shorter? If you can find another way that makes the lifespan shorter, that generally makes the object more manageable, and should be preferred.
But again, there is not any "one true rule" here.
You cannot pick one and stick to it; the user can always choose whatever they want.
However, keep in mind that, as a class, you are not responsible for disposing objects passed in through the constructor.
Note
Mandating the first form is really a moot point, because if you want to impose instantiating the class as new SqlCeEventStore(new EventStoreDB()), then why not remove the EventStoreDB parameter altogether and instantiate the db variable inside your constructor?
Workaround
There is a workaround; check this:
public class MyClass
{
    private EventStoreDB db;

    // do not make this constructor public - hide it
    private MyClass(EventStoreDB db)
    {
        this.db = db;
    }

    // make a public constructor that calls the private one in the way you want
    public MyClass() : this(new EventStoreDB())
    {
    }
}
I would suggest that if one can reasonably imagine situations in which the constructed object would be the last thing in the universe that's interested in the passed-in object, as well as situations in which other things will want to keep using the passed-in object after the constructor is done with it, it may be desirable to have a constructor parameter which specifies whether the new object should take ownership of the object that was passed in.
Note that if the constructed object will be taking ownership of the passed-in object, it's important to make certain that object will be disposed even if the constructor throws an exception. One way to do this would be to wrap the constructor call in a routine which will, in a "finally" block, dispose the passed-in object unless the constructor had completed successfully.
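For concreteness, here is a minimal sketch of that ownership-flag idea, written in Kotlin with Closeable standing in for IDisposable; the factory-method names and the EventStoreDb stand-in are invented for illustration:
import java.io.Closeable

// Invented stand-in for the EventStoreDB dependency.
class EventStoreDb : Closeable {
    override fun close() { /* release the underlying resources */ }
}

class SqlCeEventStore private constructor(
    private val db: EventStoreDb,
    private val ownsDb: Boolean            // whether this object should close db
) : Closeable {

    override fun close() {
        if (ownsDb) db.close()             // only dispose what we own
    }

    companion object {
        // The caller keeps ownership: the store never closes the passed-in db.
        fun wrapping(db: EventStoreDb) = SqlCeEventStore(db, ownsDb = false)

        // The store takes ownership; if construction fails, the db is still closed.
        fun owning(db: EventStoreDb): SqlCeEventStore {
            var ok = false
            try {
                val store = SqlCeEventStore(db, ownsDb = true)
                ok = true
                return store
            } finally {
                if (!ok) db.close()
            }
        }
    }
}
With this shape, both usage styles from the question remain valid, and the disposal responsibility is explicit at each call site.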

How to document the Main method? [closed]

Okay, so I've got a .NET console application with its Main method, contained in a Program class. You know, the usual:
class Program
{
    static void Main(string[] args)
    {
        // Do something spectacular
    }
}
Since I've started using StyleCop and FxCop so rigorously, I've become kind of nit-picky about making sure everything is properly documented.
Then it hit me. I have absolutely no idea how to properly document Program and Program.Main.
I suppose, in the long run, that you could go with the following:
/// <summary>
/// Encapsulates the application's main entry point.
/// </summary>
class Program
{
    /// <summary>
    /// The application's main entry point.
    /// </summary>
    static void Main(string[] args)
    {
        // Do something spectacular
    }
}
But that seems woefully inadequate (despite the fact that my Main routines always delegate to other classes to do the work).
How do you folks document these things? Is there a recommendation or standard?
In my humble opinion, it's not worth the trouble to document the Main function, especially if you are just going to say "The application's main entry point." If someone doesn't know that Main is the application's main entry point, you don't want them anywhere near your code :-)
If you were to put anything there, you might document what the expected or accepted arguments are, but I think there are better places to document program options (a usage function that prints usage, the user's manual, a README file, or elsewhere), since that information is useful not only to developers but also to users of the software.
Documentation is there to add something that's not obvious from the code, and tools are there to help you, not to dictate what should and should not be documented.
"The application's main entry point" does not add anything, so don't write it.
If there's anything non-obvious like parameters, document this.
Add documentation at the class level that describes what the console program actually does, so it's purpose.
In the Main method, document the required arguments etc.; if you hand that off, 'main entry point' does suffice.
I tend to hand it off to an instance method in Program called Run(string[] args), so in that case I document the Run method with the arguments/switches it accepts.
The body of my Main() method then simply looks like:
Program prog = new Program();
prog.Run(args);
Don't. Just don't.
Just look at the two samples you created and compare: which is more readable?
I'll bet you'll choose the one without comments.