How to resolve a merge conflict using conflict markers in a single file

I have a merge conflict marked by my source code management system like this. In this case it's quite trivial to resolve it using any editor, but my question is: Which tool can be used to show the "local copy", "common ancestor" and "merged in content" side by side to allow for a more comfortable merge? If there were two or three separate files with the versions, kdiff3, meld, tkdiff or vimdiff would be able to do this, but I haven't found any tool capable of using these quite popular merge conflict markers within a single file.
@Override
public void clientChanged(Client client) {
<<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<<
    if (votingResult==null) return; // !!! check whether necessary
    refresh();
======= COMMON ANCESTOR content follows ============================
    refresh();
======= MERGED IN content follows ==================================
    if (voting!=null) {
        refresh();
    }
>>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
}

Migrating from Microsoft.Azure.Storage.Blob to Azure.Storage.Blobs - directory concepts missing

These are great guides for migrating between the different versions of the NuGet package:
https://github.com/Azure/azure-sdk-for-net/blob/Azure.Storage.Blobs_12.6.0/sdk/storage/Azure.Storage.Blobs/README.md
https://elcamino.cloud/articles/2020-03-30-azure-storage-blobs-net-sdk-v12-upgrade-guide-and-tips.html
However I am struggling to migrate the following concepts in my code:
// Return if a directory exists:
container.GetDirectoryReference(path).ListBlobs().Any();
where GetDirectoryReference is not understood and there appears to be no direct translation.
Also, the concept of a CloudBlobDirectory does not appear to have made it into Azure.Storage.Blobs e.g.
private static long GetDirectorySize(CloudBlobDirectory directoryBlob) {
    long size = 0;
    foreach (var blobItem in directoryBlob.ListBlobs()) {
        if (blobItem is BlobClient)
            size += ((BlobClient) blobItem).GetProperties().Value.ContentLength;
        if (blobItem is CloudBlobDirectory)
            size += GetDirectorySize((CloudBlobDirectory) blobItem);
    }
    return size;
}
where CloudBlobDirectory does not appear anywhere in the API.
There's no such thing as physical directories or folders in Azure Blob Storage. The directories you sometimes see are part of the blob name (e.g. folder1/folder2/file1.txt). The List Blobs request lets you pass a prefix and a delimiter, which are used by the Azure Portal and Azure Data Explorer to create a visualization of folders. For example, prefix folder1/ with delimiter / would let you see the content as if folder1 were opened.
That's exactly what happens in your code: GetDirectoryReference() adds a prefix, ListBlobs() fires the request, and Any() checks whether any items are returned.
For v12, the method that lets you do the same is GetBlobsByHierarchy (and its async version). In your particular case, where you only want to know whether any blobs exist under the prefix, GetBlobs with a prefix would also suffice.

Use of Gradle Kotlin DSL Jar.from()

I'm trying to include a single source file for the Main-Class of a jar -- actually I have a top-level directory of such files, demo/, but I don't want them all in one jar. I want separate jars, each using only one of these.
This seems like sort of an anti-pattern in Gradle, as the fundamental mechanism infers or prefers that I should instead place each in a distinct sourceSet. Ugh.
A casual reading of the docs implies Jar.from() might be useful this way: "Specifies the source files or directories..."
As it turns out, "source" is perhaps a bit of a misnomer. Here's an example, a typical Kotlin fat jar with the added from("demo/LockingBufferDemo.kt"):
val jar by tasks.getting(Jar::class) {
    manifest { attributes["Main-Class"] = "LockingBufferDemoKt" }
    from(sourceSets.main.get().output)
    from("demo/LockingBufferDemo.kt")
    dependsOn(configurations.runtimeClasspath)
    from({
        configurations.runtimeClasspath.get().filter { it.name.endsWith("jar") }
            .map { zipTree(it) }
    })
}
Forgive my naivety: Guess what does not end up in the jar? LockingBufferDemo.class. Guess what does? LockingBufferDemo.kt. In other words, this is treated more like a resource, not a source, and what would have been the simplest answer is a dead end.
Another way to approach this would be to add the demo directory as an independent sourceSet and then use from(sourceSets["demo"].get()), except I can't find a way to complete that; according to IntelliJ, get() returns a rather opaque "Provider" which I can't find mentioned in the actual javadoc: 1, 2 and I really feel like I'm heading down the garden path at this point, with the woods rapidly growing darker around me.
This should not be this complicated.
How can I add a single file (or class derived from such) into a jar in gradle without having to put it alone in a directory and create a sourceSet for every such directory?
Regarding your explanations at the start of your post, you should consider creating multiple tasks of type Jar on your own, as every task of type Jar will only create a single JAR file, and you "want separate jars". I do not think you should use different source sets, as all of the files are Kotlin source files in the end and are processed in the same way (compilation, tests, docs ...). Multiple source sets would complicate this common pipeline.
"Specifies the source files or directories..." As it turns out, "source" is perhaps a bit of a misnomer.
Well, the documentation does not stop there, but it says "for a copy and creates a child CopySpec". So it is not the source as in source code, but the source of a copy operation. In Gradle, tasks that create an archive (ZIP, JAR) share their API with tasks that copy files, as the creation of an archive can be seen as copying files from their source location to their target location (inside the archive).
So, the from method can be used to specify the files that are copied / archived. But it takes not only a sourcePath parameter but also a closure or action for configuration. Using this second parameter, you can narrow your source files or directories down to the one file you need, for example using the method include:
val jar by tasks.getting(Jar::class) {
    manifest { attributes["Main-Class"] = "LockingBufferDemoKt" }
    from(sourceSets.main.get().output) {
        include("**/LockingBufferDemo.class")
    }
    dependsOn(configurations.runtimeClasspath)
    from({
        configurations.runtimeClasspath.get().filter { it.name.endsWith("jar") }
            .map { zipTree(it) }
    })
}

App Folder files not visible after un-install / re-install

I noticed this in the debug environment where I have to do many re-installs in order to test persistent data storage, initial settings, etc... It may not be relevant in production, but I mention this anyway just to inform other developers.
Any files created by an app in its App Folder are not 'visible' to queries after manual un-install / re-install (from IDE, for instance). The same applies to the 'Encoded DriveID' - it is no longer valid.
It is probably 'by design' but it effectively creates 'orphans' in the app folder until they are manually cleaned via 'drive.google.com > Manage Apps > [yourapp] > Options > Delete hidden app data'. It also creates problems if an app relies on finding files by metadata, title, ... since these seem to be gone. As I said, not a production problem, but it can create some frustration during development.
Can any of the friendly Googlers confirm this? Is there any other way to get to these files after a re-install?
Try this approach:
Use requestSync() in onConnected() as:
@Override
public void onConnected(Bundle connectionHint) {
    super.onConnected(connectionHint);
    Drive.DriveApi.requestSync(getGoogleApiClient()).setResultCallback(syncCallback);
}
Then, in its callback, query the contents of the drive using:
final private ResultCallback<Status> syncCallback = new ResultCallback<Status>() {
    @Override
    public void onResult(@NonNull Status status) {
        if (!status.isSuccess()) {
            showMessage("Problem while retrieving results");
            return;
        }
        // 'query' is a field of the enclosing class
        query = new Query.Builder()
                .addFilter(Filters.and(Filters.eq(SearchableField.TITLE, "title"),
                        Filters.eq(SearchableField.TRASHED, false)))
                .build();
        Drive.DriveApi.query(getGoogleApiClient(), query)
                .setResultCallback(metadataCallback);
    }
};
Then, in its callback, if found, retrieve the file using:
final private ResultCallback<DriveApi.MetadataBufferResult> metadataCallback =
        new ResultCallback<DriveApi.MetadataBufferResult>() {
    @SuppressLint("SetTextI18n")
    @Override
    public void onResult(@NonNull DriveApi.MetadataBufferResult result) {
        if (!result.getStatus().isSuccess()) {
            showMessage("Problem while retrieving results");
            return;
        }
        MetadataBuffer mdb = result.getMetadataBuffer();
        DriveId driveId = null;  // keep the last match so it is still in scope below
        for (Metadata md : mdb) {
            Date createdDate = md.getCreatedDate();
            driveId = md.getDriveId();
        }
        if (driveId != null) {
            readFromDrive(driveId);
        }
    }
};
Job done!
Hope that helps!
It looks like Google Play services has a problem. (https://stackoverflow.com/a/26541831/2228408)
For testing, you can do it by clearing Google Play services data (Settings > Apps > Google Play services > Manage Space > Clear all data).
Or, at this time, you need to implement it by using Drive SDK v2.
I think you are correct that it is by design.
By inspection I have concluded that until an app places data in the AppFolder, Drive does not sync down to the device, however much you try to hassle it. Therefore it is impossible to check for the existence of AppFolder content placed by another device or by a prior installation. I'd assume that this was to try and create a consistent clean install.
I can see that there are a couple of strategies to work around this:
1) Place dummy data in the AppFolder and then sync and recheck (a rough sketch of this is shown below).
2) Accept that in the first instance there is the possibility of duplicates: since you cannot access the existing file, by definition you will create a new copy, and then use custom metadata to come up with a scheme to differentiate like-named files and choose which one you want to keep (essentially implementing your conflict-merge strategy across the two different files).
I've done the second: I have an update number to compare data from different devices and decide which version I want, and hence whether to upload, download or leave alone. As my data is an SQLite DB, I also have some code to only sync once updates have settled down; I deliberately consider people updating two devices at once foolish, and the results are consistent but undefined as to which device wins.
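For reference, the first option could look roughly like this with the (since-deprecated) Google Play services Drive API; the dummy title and the createCallback name are only placeholders:
// Sketch only: create a dummy file in the AppFolder, then requestSync()
// and re-run the query from the answer above.
Drive.DriveApi.newDriveContents(getGoogleApiClient())
        .setResultCallback(new ResultCallback<DriveApi.DriveContentsResult>() {
            @Override
            public void onResult(@NonNull DriveApi.DriveContentsResult result) {
                if (!result.getStatus().isSuccess()) {
                    return;
                }
                MetadataChangeSet changeSet = new MetadataChangeSet.Builder()
                        .setTitle("dummy.txt")        // placeholder title
                        .setMimeType("text/plain")
                        .build();
                Drive.DriveApi.getAppFolder(getGoogleApiClient())
                        .createFile(getGoogleApiClient(), changeSet, result.getDriveContents())
                        .setResultCallback(createCallback); // placeholder callback
            }
        });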

JFrame in remote between JDK 5 (Server) and 6 (Client - VisualVM)

So I am having a little trouble with opening a JFrame. I have searched extensively on the net, but I really cannot find a solution...
Let me explain the situation:
I need to develop an application that retrieves tracking information from an application while meeting new security standards. For that I use JMX, which allows monitoring, and VisualVM to view this information.
I can therefore connect to JMX from VisualVM without problems (as of recently ^^).
There is thus a VisualVM plugin for retrieving information on MBeans, including their methods (the Operations tab in the plugin).
This allows, among other things, stopping a service or creating an event.
My problem comes when I try to display a statistics result.
In fact, I have to show, at the click of a button from the list of methods in the Operations tab, a window with an HTML table (titles, colors and everything else).
For that I use a JFrame:
public JFrame displayHTMLJFrame(String HTML, String title) {
    JFrame fen = new JFrame();
    fen.setSize(1000, 800);
    fen.setTitle(title);
    JEditorPane pan = new JEditorPane();
    pan.setEditorKit(new HTMLEditorKit());
    pan.setEditable(false);
    pan.setText(HTML);
    fen.add(pan);
    return fen;
}
I call it in my method:
public JFrame displayHtmlSqlStatOK_VM() {
    return displayHTMLJFrame(displaySQLStat(sqlStatOK, firstMessageDate), "SqlStatOK");
}
The method should therefore give me back my JFrame, but instead it generates an error:
Problem invoking displayHtmlSqlStatOK_VM : java.rmi.UnmarshalException: error unmarshalling return; nested exception is:
java.io.InvalidClassException: javax.swing.JFrame; local class incompatible: stream classdesc serialVersionUID = -5208364155946320552, local class serialVersionUID = -2386951414768123374
I saw on the internet that this was a version problem (Serialization), and I believe strongly that it comes from the fact that I have this:
Server - JDK5 <----> Client (VisualVM) - JDK6
Note that I cannot change the server version (the cost would be too high...), as advocated by some sites and forums.
My question is as follows:
Can I display this damn window while keeping my current architecture (JDK 5 server side and JDK 6 client side)?
Could I maybe force the issue? Tell it that there is nothing wrong and that it can run my code? I keep asking it, but it does not answer me; maybe it will answer you... (Yes, I'm cracking up ^^).
Thank you very much to those who read this and help me!
If you need more info do not hesitate.
EDIT
The solution to my problem might be elsewhere, because in fact I just want a table with minimal formatting (this is just a viewing application so that an operator can have his little table, possibly with critical data shown in red...).
But I have nowhere found a list of the types that I can return to VisualVM... This does not, however, seem too much to ask.
I had also thought of a fallback solution, which would be to create a temporary HTML file and open it automatically in the browser; that is perhaps not very clean... but if it can work ^^
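Just to illustrate that fallback idea (the temp-file name is arbitrary, and java.awt.Desktop needs Java 6, which the client side has), it would be something like:
import java.awt.Desktop;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

// Fallback sketch: write the HTML to a temporary file and open it in the default browser.
public void showHtmlInBrowser(String html) throws IOException {
    File tmp = File.createTempFile("sqlstat", ".html");
    tmp.deleteOnExit();
    FileWriter writer = new FileWriter(tmp);
    try {
        writer.write(html);
    } finally {
        writer.close();
    }
    if (Desktop.isDesktopSupported()) {
        Desktop.getDesktop().browse(tmp.toURI());
    }
}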
I am open to any area of research!
It looks like you are sending an instance of javax.swing.JFrame over the JMX connection - this is a bad idea.
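The operation should return something serializable instead, for example the HTML itself as a plain String, and let the client decide how to render it. A minimal sketch (the interface and method names here are made up):
// Sketch: expose the statistics as a String of HTML rather than as a JFrame.
public interface SqlStatMBean {
    String displayHtmlSqlStatOK();
}

public class SqlStat implements SqlStatMBean {
    public String displayHtmlSqlStatOK() {
        // e.g. the displaySQLStat(sqlStatOK, firstMessageDate) call from the question,
        // which already produces the HTML table as text
        return "<html><table>...</table></html>";
    }
}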
Well, I found it myself, like a big boy :)
Thank you, bye!
..........
Just kidding, of course I will give the solution that I found ^^
So here's what I did:
My display has to happen on the client (naturally...), so the JFrame code that I had set up on the server was displayed, obviously... on the server xD
I did not want to change the client (VisualVM), to allow users maximum flexibility. However, I realized that to display my HTML table in a usable form (with colors and everything) I had to change the client, as JMX does not support JFrame as the return type of an operation.
Since my operation runs from the MBeans plugin for VisualVM, I had to find its source code in order to tell it: "Careful, if you see that I am giving you HTML, display it in a JFrame".
Here is my approach:
- Get the sources
The SVN link to get the VisualVM sources is as follows:
https://svn.java.net/svn/visualvm~svn/branches/release134
If, like me, you have trouble with the SVN client included in NetBeans because you are behind a proxy, you can do it from the command line:
svn --config-option servers:global:http-proxy-host=MY_PROXY_HOST --config-option servers:global:http-proxy-port=MY_PROXY_PORT checkout https://svn.java.net/svn/visualvm~svn/branches/release134 sources-visualvm
After changing to your destination folder, of course (e.g. cd C:\Users\me\Documents\SourcesVisualVM).
- Adding the VisualVM platform
NetBeans needs the VisualVM platform to create modules (plugins) for it. For this, go to "Tools" -> "NetBeans Platforms".
Then click "Add Platform..." at the bottom left of the window and select the folder of the binaries downloaded from this address: http://visualvm.java.net/download.html
You should have this:
http://img15.hostingpics.net/pics/543268screen1.png
- Adding the sources to the workspace (NetBeansProjects)
Copy/paste the downloaded sources (from the SVN link above) into your NetBeans workspace (by default C:\Users\XXX\Documents\NetBeansProjects).
- Opening the MBeans plugin project
In NetBeans, right-click in the Project Explorer (or go to the "Files" menu) and click "Open Project...".
You will then have a list of projects in your workspace.
Open the project "mbeans" found in "release134" -> "Plugins", as below:
http://img15.hostingpics.net/pics/310487screen2.png
- Changing the "platform.properties" file
To build the plugin you must define some variables for your platform.
To do this, open the platform.properties file in the release134\plugins\nbproject directory of your workspace.
Replace the content (adjusting the paths to match yours):
cluster.path=\
C:\\Program Files\\java6\\visualvm_134\\platform:\
C:\\Program Files\\java6\\visualvm_134\\profiler
# Deprecated since 5.0u1; for compatibility with 5.0:
disabled.clusters=
nbjdk.active=default
nbplatform.active=VisualVM_1.3.4
suite.dir=${basedir}
harness.dir= C:\\Program Files\\NetBeans 7.1.2\\harness
- Changing the XMBeanOperations class
To add our feature (displaying an HTML table), you must change the class that processes operations, namely the XMBeanOperations class in the package com.sun.tools.visualvm.modules.mbeans.
At line 173, replace:
if (entryIf.getReturnType() != null &&
        !entryIf.getReturnType().equals(Void.TYPE.getName()) &&
        !entryIf.getReturnType().equals(Void.class.getName()))
    fireChangedNotification(OPERATION_INVOCATION_EVENT, button, result);
With:
if (entryIf.getReturnType() != null &&
        !entryIf.getReturnType().equals(Void.TYPE.getName()) &&
        !entryIf.getReturnType().equals(Void.class.getName())) {
    if (entryIf.getReturnType() instanceof String) {
        String res = result + "";
        if (res.indexOf("<html>") != -1) {
            JFrame frame = displayHTMLJFrame(res, button.getText());
            frame.setVisible(true);
        } else {
            fireChangedNotification(OPERATION_INVOCATION_EVENT, button, result);
        }
    } else {
        fireChangedNotification(OPERATION_INVOCATION_EVENT, button, result);
    }
}
Along with the method that creates the JFrame, which you place above "void performInvokeRequest(final JButton button)", for example:
// Display a frame with HTML code
public JFrame displayHTMLJFrame(String HTML, String title) {
    JFrame fen = new JFrame();
    fen.setSize(1000, 800);
    fen.setTitle(title);
    JEditorPane pan = new JEditorPane();
    pan.setEditorKit(new HTMLEditorKit());
    pan.setEditable(false);
    pan.setText(HTML);
    fen.add(pan);
    return fen;
}
As you can see, there was already a test on the return type; if a String is returned and we find the <html> tag in that string, then we replace the result of the click with the opening of a JFrame containing that string, which displays our HTML code!
- Creating a .nbm
The .nbm file is the deployment file of your plugin. Simply right-click your project (in the Project Explorer) and click "Create NBM".
Your .nbm file will be created in the "build" folder at the root of your project.
- Installing the plugin in VisualVM
To install your plugin, just open VisualVM, go to "Tools" -> "Plugins", then the "Downloaded" tab, and click "Add Plugins...". Select your .nbm plugin file, click "Install" and follow the instructions.
Useful Sources
http://docs.oracle.com/javase/6/docs/technotes/guides/visualvm/
http://visualvm.java.net/
http://visualvm.java.net/api-quickstart.html (Create a VisualVM plugin with NetBeans)
Thank you very much for your help Tomas Hurka ;)

Grails - store sql that will be used by services

I am writing a Grails application that will mostly be using the springws web services plugin with endpoints backed by services. The services will retrieve data from a variety of back end databases (i.e., not via domain classes and GORM). I would like to store the sql that my services will be using to fetch the data for the web services in external files.
I'm looking for suggestions on:
Where is the best place to keep the files (i.e., I'd like to put them somewhere obvious like grails-app/sql) and the best format (e.g., XML, ConfigSlurper, etc.)
The best way to abstract the retrieval of the SQL text so that the services that execute the SQL do not need to know where or how it is fetched. Services will just provide a SQL id and get the SQL back.
I was working on a project recently where I needed to do something similar. I created the following directory to store the sql files:
./grails-app/conf/sql
For example there is a file ./grails-app/conf/sql/hr/FIND_PERSON_BY_ID.sql that has something like the following:
select a.id
     , a.first_name
     , a.last_name
from person a
where a.id = ?
I created a SqlCatalogService class that would load all files in that directory (and subdirectories) and store the filenames (minus extension) and file text in a Map. The service has a get(id) method that returns the sql text that is cached in the Map. Since files/directories stored in grails-app/conf are placed in the classpath, the SqlCatalogService uses the following code to read in the files:
....
....
Map<String,String> sqlCache = [:]
....
....
void loadSqlCache() {
    try {
        loadSqlCacheFromDirectory(new File(this.class.getResource("/sql/").getFile()))
    } catch (Exception ex) {
        log.error(ex)
    }
}

void loadSqlCacheFromDirectory(File directory) {
    log.info "Loading SQL cache from disk using base directory ${directory.name}"
    synchronized(sqlCache) {
        if(sqlCache.size() == 0) {
            try {
                directory.eachFileRecurse { sqlFile ->
                    if(sqlFile.isFile() && sqlFile.name.toUpperCase().endsWith(".SQL")) {
                        def sqlKey = sqlFile.name.toUpperCase()[0..-5]
                        sqlCache[sqlKey] = sqlFile.text
                        log.debug "added SQL [${sqlKey}] to cache"
                    }
                }
            } catch (Exception ex) {
                log.error(ex)
            }
        } else {
            log.warn "request to load sql cache and cache not empty: size [${sqlCache.size()}]"
        }
    }
}

String get(String sqlId) {
    def sqlKey = sqlId?.toUpperCase()
    log.debug "SQL Id requested: ${sqlKey}"
    if(!sqlCache[sqlKey]) {
        log.debug "SQL [${sqlKey}] not found in cache, loading cache from disk"
        loadSqlCache()
    }
    return sqlCache[sqlKey]
}
Services that use various datasources use the SqlCatalogService to retrieve the sql by calling the get(id) method:
class PersonService {

    def hrDataSource
    def sqlCatalogService

    private static final String SQL_FIND_PERSON_BY_ID = "FIND_PERSON_BY_ID"

    Person findPersonById(String personId) {
        try {
            def sql = new groovy.sql.Sql(hrDataSource)
            def row = sql.firstRow(sqlCatalogService.get(SQL_FIND_PERSON_BY_ID), [personId])
            row ? new Person(row) : null
        } catch (Exception ex) {
            log.error ex.message, ex
            throw ex
        }
    }
}
For now we only have a few SQL statements, so storing all the text in a Map is not an issue. If you have lots of SQL files to store, you may need to think about using something like Ehcache and defining an eviction strategy (i.e., least recently used or least frequently used), keeping only the most used statements in memory and leaving the rest on disk until needed.
Before doing this I thought about using GORM and storing the SQL text in the database, but I decided that having the SQL in files made it easier to develop with, since we could pretty much save the SQL to file directly from our SQL tool (replacing hard-coded params with question marks) and let our revision control system track the changes. I'm not saying the above service is the most efficient or correct way to handle this, but it has worked so far for our needs.
Have you considered using Grails GORM and an HSQLDB database to store the SQL you want executed? You could then put in a record for each service containing that service's SQL and retrieve it using normal Grails GORM functions. You could also generate a default set of controllers and views that would allow you to edit the SQL.
If you want to store the SQL in external files, you can create a subdirectory in the web-app directory called sql, then store your SQL statements as text files. You could create a class that takes a service name, loads the associated text file containing the SQL and returns the contents of that file.
Without knowing how complex your SQL will be, I can't say what the best format would be. If you're dealing with normal select statements with no parameter substitution, plain text would be best. If you're dealing with more complex SQL with substitutions and multiple queries, you may want to use XML.
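For the plain-files variant, a minimal sketch of such a lookup class (written here in plain Java; the web-app/sql location and the one-file-per-service naming are purely illustrative) could be:
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

// Sketch: resolve a service name to web-app/sql/<serviceName>.sql and return its text.
public class SqlFileCatalog {

    private final File baseDir;

    public SqlFileCatalog(File baseDir) {   // e.g. new File("web-app/sql")
        this.baseDir = baseDir;
    }

    public String getSql(String serviceName) throws IOException {
        File sqlFile = new File(baseDir, serviceName + ".sql");
        return new String(Files.readAllBytes(sqlFile.toPath()), "UTF-8");
    }
}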