WebSharingAppDemo-CEProviderEndToEnd Queries peerProvider for NeedsScope before any files are batched to the server. This seems out of order? - wcf

I'm building an application based on the WebSharingAppDemo-CEProviderEndToEnd. When I deploy the server portion on a server, the code gives the error "The path is not valid. Check the directory for the database." during the call to NeedsScope() in the CeWebSyncService.cs file.
Obviously the server can't access the client's .sdf, but what is supposed to happen to make this work? The app uses batching to send the data, and the batches have to be marshalled across to the temp directory, but this problem occurs before any files have been batched over. There is nothing for the server to look at to determine whether the peerProvider needs scope. What am I missing?
public bool NeedsScope()
{
    Log("NeedsSchema: {0}", this.peerProvider.Connection.ConnectionString);
    SqlCeSyncScopeProvisioning prov = new SqlCeSyncScopeProvisioning();
    return !prov.ScopeExists(this.peerProvider.ScopeName, (SqlCeConnection)this.peerProvider.Connection);
}
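For context, NeedsScope() runs entirely against the connection the service-side provider was constructed with, so nothing has to be batched over from the client first; the "path is not valid" error usually means that connection string points at an .sdf path that does not exist on the server. A minimal sketch of the service-side wiring this check relies on (the scope name and path below are hypothetical):

// Sketch only: the service-side provider must point at a database file that exists
// on the *server*, not at the client's .sdf. NeedsScope() then simply asks that
// server-side database whether the scope has already been provisioned.
this.peerProvider = new SqlCeSyncProvider(
    "Sales",                                                                  // assumed scope name
    new SqlCeConnection(@"Data Source=C:\SyncService\App_Data\server.sdf"));  // assumed server-local path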

I noticed that the sample was using a proxy to talk to the CE file but a provider (not a proxy) to talk to the SQL Server database.
I switched it so there is a proxy to reach the SQL Server and a provider to access the CE file.
That seems to work for me.
stats = synchronizationHelper.SynchronizeProviders(srcProvider, destinationProxy);
vs.
SyncOperationStatistics stats = syncHelper.SynchronizeProviders(srcProxy, destinationProvider);
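For reference, the SynchronizeProviders helper in the sample is presumably a thin wrapper around SyncOrchestrator; a minimal sketch of that pairing (only the helper shape is taken from the snippets above, the rest is an assumption) could look like this:

using System;
using Microsoft.Synchronization;

// Minimal sketch: pair a local provider with a remote provider/proxy and run one session.
static SyncOperationStatistics SynchronizeProviders(
    KnowledgeSyncProvider localProvider,   // e.g. SqlCeSyncProvider over the local .sdf
    KnowledgeSyncProvider remoteProvider)  // e.g. the proxy that forwards calls to the web service
{
    var orchestrator = new SyncOrchestrator
    {
        LocalProvider = localProvider,
        RemoteProvider = remoteProvider,
        Direction = SyncDirectionOrder.UploadAndDownload
    };

    SyncOperationStatistics stats = orchestrator.Synchronize();
    Console.WriteLine("Uploaded: {0}, Downloaded: {1}",
        stats.UploadChangesTotal, stats.DownloadChangesTotal);
    return stats;
}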

Related

Lucee <cfadmin> does not correctly store connectionString property when executing an "updateDatasource" operation

I'm hoping someone can shed some light on this issue. I'm attempting to programmatically add datasources to the Lucee Server context (i.e. not on a per-application basis, but rather datasources that are made available to all web contexts on the server). The following call to the <cfadmin> tag, used to create the datasource or to later update the same datasource, results in the connectionString never being saved correctly.
NOTE: "updateDatasource" will create a datasource if it doesn't already exist.
Host Environment: Windows Server 2019 running Lucee 5.3.8.206 on OpenJDK17.
Database Environment: Windows Server 2019 running SQL Server 2019.
<cfadmin
    action="updateDatasource"
    type="server"
    password="F4K31234"
    bundlename="org.lucee.mssql"
    bundleversion="8.4.1.jre8"
    classname="com.microsoft.sqlserver.jdbc.SQLServerDriver"
    dsn="my_new_datasource"
    name="my_new_datasource"
    newName="my_new_datasource"
    connectionString="jdbc:sqlserver://SQLSERVERNAME\MSSQLSERVER2019;DATABASENAME=my_database;sendStringParametersAsUnicode=true;SelectMethod=direct"
    dbusername="Temp1234"
    dbpassword="F4K31234"
    connectionLimit="100"
    alwaysSetTimeout="true"
    validate="false"
    allowed_select="true"
    allowed_insert="true"
    allowed_update="true"
    allowed_delete="true"
    allowed_create="true"
    allowed_revoke="true"
    allowed_alter="true"
    allowed_grant="true"
    clob="true"
    lineTimeout="60">
Every time this operation is attempted, the Connection String is stored as "my_database". In other words, it appears to ignore the string provided in the connectionString attribute and instead stores the database name for the datasource connection string.
These settings are exactly what I use when manually setting up a datasource in the Lucee Server administration area (minus the obviously fake usernames, passwords, server names, and database names).
Before I go about filing a bug, I wanted to be sure I'm not missing something here. I appreciate any insight!

ArcGis Offline map layer changes synchronization

In my WPF application I'm trying to use offline map functionality. Right now my feature service is configured for data sync, and I'm able to create a data replica on the server and download a local copy of the geodatabase.
_gdbSyncTask = await GeodatabaseSyncTask.CreateAsync(_featureServiceUri);
Envelope extent = new Envelope(xmin, ymin, xmax, ymax, new SpatialReference(wkidStart));
GenerateGeodatabaseParameters generateParams = await _gdbSyncTask.CreateDefaultGenerateGeodatabaseParametersAsync(extent);
_generateGdbJob = _gdbSyncTask.GenerateGeodatabase(generateParams, _gdbPath);
_generateGdbJob.JobChanged += GenerateGdbJobChanged;
_generateGdbJob.ProgressChanged += ((object sender, EventArgs e) =>
{
    UpdateProgressBar();
});
_generateGdbJob.Start();
After the initial synchronization, I'm able to successfully work with the map in offline mode. This includes operations like adding new geometries or editing existing polygons inside the local DB.
However, when I try to synchronize the changes back to the server, I get no results.
To perform data synchronization with the local database, I'm using the following code:
SyncGeodatabaseParameters parameters = new SyncGeodatabaseParameters()
{
    GeodatabaseSyncDirection = SyncDirection.Bidirectional,
    RollbackOnFailure = false
};
Geodatabase gdb = await Geodatabase.OpenAsync(this.GetGdbPath());
foreach (GeodatabaseFeatureTable table in gdb.GeodatabaseFeatureTables)
{
    long id = table.ServiceLayerId;
    SyncLayerOption option = new SyncLayerOption(id);
    option.SyncDirection = SyncDirection.Bidirectional;
    parameters.LayerOptions.Add(option);
}
_gdbSyncTask = await GeodatabaseSyncTask.CreateAsync(_featureServiceUri);
SyncGeodatabaseJob job = _gdbSyncTask.SyncGeodatabase(parameters, gdb);
job.JobChanged += SyncJob_JobChanged;
job.ProgressChanged += SyncJob_ProgressChanged;
job.Start();
Everything goes well. The synchronization ends with status "Succeeded". The messages logged by the SyncGeodatabaseJob are as in the screenshot below:
However, when I open the edited feature layer from the server in the map web client, I cannot find any of my local changes. In the server database I can also see that no new records were created during the synchronization.
The interesting thing is that when I open the "Replica" data in the web client I can see the following information:
Replica Server Gen: 2
Creation Date: 2018/02/07 10:49:54 UTC
Last Sync Date: 2018/02/07 10:49:54 UTC
The "Last Sync Date" is equal to the replica "Creation Date". However, in the replica log in ArcMap I can see the following information:
Can anyone tell me how I should interpret the situation described above? Am I missing some steps in my code? Or maybe some configuration is missing on the server? It looks like data modifications are successfully pushed back to the replica on the server, but after that the replica is not synchronized with the server database (should that happen automatically?).
I'm new to ArcGIS development, so any help will be appreciated.
Thanks for all the answers. It turned out that versioning is enabled on the server database, and the offline, versioned changes were not reconciled to the server.
After running the reconcile/post script (http://desktop.arcgis.com/en/arcmap/10.3/manage-data/geodatabases/automate-reconcile-post-after-sync.htm), the offline changes became visible to other system users.
The code looks OK at a quick glance, so I would assume that there is something going on in the setup.
What do you get back from the sync operation after the sync has completed? Note that you can just use await syncJob.GetResultAsync() to start the job and await the results.
How is the Feature Service set up on the server? Please refer to https://enterprise.arcgis.com/en/server/latest/publish-services/linux/prepare-data-for-offline-use.htm for the different ways to set these things up.
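Building on the answer above, a minimal sketch of surfacing what the sync job actually returned (dropped into the same async method as the sync snippet in the question; the SyncLayerResult/FeatureEditResult property names are assumptions to verify against the installed ArcGIS Runtime version):

// Sketch only: run the sync and dump status, job messages, and per-layer edit results.
SyncGeodatabaseJob job = _gdbSyncTask.SyncGeodatabase(parameters, gdb);
IReadOnlyList<SyncLayerResult> results = await job.GetResultAsync();

System.Diagnostics.Debug.WriteLine($"Job status: {job.Status}");
foreach (JobMessage message in job.Messages)
{
    System.Diagnostics.Debug.WriteLine($"[{message.Severity}] {message.Message}");
}

foreach (SyncLayerResult result in results)
{
    // EditResults / CompletedWithErrors / Error are assumed property names.
    foreach (FeatureEditResult edit in result.EditResults)
    {
        if (edit.CompletedWithErrors)
            System.Diagnostics.Debug.WriteLine($"Edit {edit.ObjectId} failed: {edit.Error?.Message}");
    }
}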

Is it necessary that Data Source of connection string must match the system name

This is my first post to this precious website. I am new to VB.NET. I am working on a simple purchase project, where I ran into some errors. The first thing that baffled me is this:
This is my connection string at module level, on the development machine.
Public strCn As String = "Data Source = (local); Initial Catalog = PSys; Integrated Security = false; User ID = sa; Password = 123;"
Is it mandatory that Data Source be the actual system name? I mean, if I use (local) or ( . ), will it work or not? Because when I copy my project to any other system for further development, I need to change the Data Source every time; otherwise I get the error: "A network-related or instance-specific error occurred...".
Kindly guide me on what I need to do.
When you are developing an application which uses a database server such as MS SQL, it is not wise to install the server along with your application on every PC it is installed to. For example, what are you going to do if a customer has a local network with 10 computers? Are you going to install SQL Server on all 10 of them? And if so, what if they need to share data?
So your best approach (based on common practice by other applications) will be to allow the user to install SQL Server wherever he wants and let him configure your application to point it to the server's location. If you follow that path, the configuration of your application can live in the setup application or in the application itself.
Now, about the development phase: I had a similar situation in which I needed to develop the same application on two different computers. What I did was install SQL Server on both of them with a named instance "sqlexpress", and then in the application I used the
Data.SqlClient.SqlConnectionStringBuilder
class to build the connection string. I did something like this:
Public Function getDevConnectionString() As String
    Dim csb As New Data.SqlClient.SqlConnectionStringBuilder(My.Settings.dbConnectionString) '<- My original cs in app settings
    csb.DataSource = My.Computer.Name & "\sqlexpress"
    Return csb.ConnectionString
End Function
Whenever I need a connection string I simply call getDevConnectionString(), which returns the connection string based on the computer name plus the SQL Server instance name. For example:
Dim cs As String
#If DEBUG Then
    cs = getDevConnectionString()
#Else
    cs = getReleaseConnectionString()
#End If
where getReleaseConnectionString() is the function that returns your connection string configured by the customer.
Hope this points you in the right direction...

web service - web client function will not write to db

I am unable to write any records to my database using a web service. The service is set up OK; I can access it via its URI and also query my database through the service using a simple page I created.
When it comes to writing to the database, I am not getting any errors, and the instance of my WebClient which is populated with the values to write to the db is holding them all OK. But when it comes to actually writing to the db (see the code below), nothing seems to happen except that the MemberID of the last existing member added to the database is returned.
'assign all abMem fields to values within form to write to database
newMem.Title = ddTitle.SelectedValue
newMem.Initials = txtInitials.Text
newMem.Surname = txtSurname.Text
newMem.Address1 = txtAdd1.Text
newMem.Address2 = txtAdd2.Text
newMem.Address3 = txtAdd3.Text
'etc etc .... additional fields have been removed

Try
    cc.Open()
    cc.CreateMember(newMem)
    returnMem = cc.GetMember(newMem)
    MesgBox(returnMem.MemberID & " - Member Created")
    cc.Close()
Catch cex As CommunicationException
    MesgBox("CommEX - " & cex.Message)
    cc.Abort()
Catch tex As TimeoutException
    MesgBox("TimeEX - " & tex.Message)
    cc.Abort()
Finally
    MesgBox("Closed the Client")
End Try
When I run the above, I've noticed in the log file for the service (in the system32 folder on my server) that two requests are made each time: presumably one where I am trying to add a record, and the other, I would think, the request for the ID of this member (which isn't created, hence why I believe it is simply returning the last successful entry in the table).
I know there isn't a problem with the actual web service, as another user is successfully able to add to the db via the service (unfortunately I am unable to simply copy their set-up, as they are hitting it via a PHP page), so I know the problem is somewhere in my code.
What I am wondering is: is cc.CreateMember(newMem) the correct syntax for passing a member's details to the function in the web service?
I've rewritten the code (it seems identical to the above) and republished the web service. It seems to be working OK now, so I must have had some silly mistake somewhere!

Determine request Uri from WCF Data Services LINQ query for FirstOrDefault against Azure without executing it?

Problem
I would like to trace the Uri that will be generated by a LINQ query executed against a Microsoft.WindowsAzure.StorageClient.TableServiceContext object. TableServiceContext just extends System.Data.Services.Client.DataServiceContext with a couple of properties.
The issue I am having is that the query executes fine against our Azure Table Storage instance when we run the web role on a dev machine in debug mode (we are connecting to Azure storage in the cloud not using Dev Storage). I can get the resulting query Uri using Fiddler or just hovering over the statement in the debugger.
However, when we deploy the web role to Azure, the query fails against the exact same Azure Table Storage source with a ResourceNotFound DataServiceClientException. We have had ResourceNotFound errors before that dealt with the behavior of FirstOrDefault() on empty tables. This is not the problem here.
As one approach to the problem, I wanted to compare the query Uri that is being generated when the web role is deployed versus when it is running on a dev machine.
Question
Does anyone know a way to get the query Uri for the query that will be sent when the FirstOrDefault() method is called? I know that you can call ToString() on the IQueryable returned from the TableServiceContext, but my concern is that when FirstOrDefault() is called the Uri might be further optimized, and ToString() on the IQueryable might not be what is ultimately sent to the server.
If someone has another approach to the problem I am open to suggestions. It seems to be a general problem with LINQ when trying to determine what will happen when the expression tree is finally evaluated. I am open to suggestions here as well because my LINQ skills could use some improvement.
Sample Code
public void AddSomething(string ProjectID, string Username) {
    TableServiceContext context = new TableServiceContext();
    var qry = context.Somethings.Where(m => m.RowKey == Username
        && m.PartitionKey == ProjectID);
    System.Diagnostics.Trace.TraceInformation(qry.ToString());
    // ^ Here I would like to trace the Uri that will be generated
    //   and sent to the server when the qry.FirstOrDefault() call below is executed.
    if (qry.FirstOrDefault() == null) {
        // ^ This statement generates an error when the web role is running
        //   in the fabric
        ...
    }
}
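One way to see the Uri that actually goes over the wire (rather than inferring it from ToString()) might be to hook the SendingRequest event, which TableServiceContext inherits from DataServiceContext; a hedged sketch, mirroring the context and query from the sample above:

TableServiceContext context = new TableServiceContext();

// Log every outgoing request Uri, including the one produced by FirstOrDefault().
// On newer WCF Data Services client libraries the equivalent hook is SendingRequest2.
context.SendingRequest += (sender, e) =>
{
    System.Diagnostics.Trace.TraceInformation("Request Uri: {0}", e.Request.RequestUri);
};

var qry = context.Somethings.Where(m => m.RowKey == Username
    && m.PartitionKey == ProjectID);
var first = qry.FirstOrDefault();   // the traced Uri is what was actually sent to the server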
Edit: Update and Answer
Steve provided the right answer. Our problem was exactly as described in this post, which describes an issue with PartitionKey/RowKey ordering in a single-entity query that was fixed with an update to the Azure OS. This explains the discrepancy between our dev machines and the web role deployed to Azure.
When I indicated we had dealt with the ResourceNotFound issue before in our existence checks, we had dealt with it in two ways in our code. One way was using exception handling to deal with the ResourceNotFound error; the other was to put the RowKey first in the LINQ query (as some MS people had indicated was appropriate).
It turns out we have several places where the RowKey was put first instead of using the exception handling. We will address this by refactoring our code to target .NET 4 and setting the IgnoreResourceNotFoundException = true property on the TableServiceContext.
Lesson learned (more than once): Don't depend on quirky undocumented behavior.
Aside
We were able to get the query Uris. They did turn out to be different (as indicated they would be in the blog post). Here are the results:
Query Uri from Dev Fabric
https://ourproject.table.core.windows.net/Somethings()?$filter=(RowKey eq 'test19#gmail.com') and (PartitionKey eq '41e0c1ae-e74d-458e-8a93-d2972d9ea53c')
Query Uri from Azure Fabric
https://ourproject.table.core.windows.net/Somethings(RowKey='test19#gmail.com',PartitionKey='41e0c1ae-e74d-458e-8a93-d2972d9ea53c')
I can do one better... I think I know what the problem is. :)
See http://blogs.msdn.com/b/windowsazurestorage/archive/2010/07/26/how-wcf-data-service-changes-in-os-1-4-affects-windows-azure-table-clients.aspx.
Specifically, it used to be the case (in previous Guest OS builds) that if you wrote the query as you did (with the RowKey predicate before the PartitionKey predicate), it resulted in a filter query, while the reverse (PartitionKey preceding RowKey) resulted in the kind of query that raises an exception if the result set is empty.
I think the right fix for you (as indicated in the above blog post) is to set the IgnoreResourceNotFoundException to true on your context.
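For completeness, a small sketch of the suggested fix (IgnoreResourceNotFoundException plus putting the PartitionKey predicate first; the context type and entity set here simply mirror the question's sample):

TableServiceContext context = new TableServiceContext();

// Treat ResourceNotFound on single-entity lookups as an empty result instead of an
// exception (available when targeting .NET 4 / the updated data services client).
context.IgnoreResourceNotFoundException = true;

// PartitionKey before RowKey yields the single-entity addressing form shown above;
// the reverse order produced a $filter query on older Guest OS builds.
var something = context.Somethings
    .Where(m => m.PartitionKey == ProjectID && m.RowKey == Username)
    .FirstOrDefault();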