Atomic update not working with Spring Data Solr - spring-data-solr

I have a Spring Data Solr project, and my repository class is a simple SolrCrudRepository. My question is how to make Spring Data Solr use the atomic update feature of Solr 4. In other words, what extra configuration do I need so that atomic updates work when I call Repository.save()?

Use PartialUpdate along with SolrTemplate:
// Atomic update of the document whose "id" field is "123456789".
PartialUpdate update = new PartialUpdate("id", "123456789");
update.setValueOfField("name", "updated-name"); // only this field is changed
solrTemplate.saveBean(update);
solrTemplate.commit();
Please have a look at ITestSolrTemplate.
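Note that atomic updates also need support on the Solr side: the update log must be enabled in solrconfig.xml, and in Solr 4 the schema's fields generally need to be stored so Solr can rebuild the document. A sketch of the relevant solrconfig.xml fragment (the directory property is illustrative):
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Atomic updates require the update (transaction) log to be enabled. -->
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
  </updateLog>
</updateHandler>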

Alternatively, with plain SolrJ you can build the atomic update yourself by mapping a modifier ("set") to the new value:
SolrInputDocument doc = new SolrInputDocument();
Map<String, Object> partialUpdate = new HashMap<>();
partialUpdate.put("set", "value to update"); // "set" replaces the current value
doc.addField("id", "100"); // unique key of the document to update
doc.addField("fieldName", partialUpdate); // the field to update atomically
UpdateRequest up = new UpdateRequest();
up.setBasicAuthCredentials("username", "#password");
up.add(doc);
up.process(solrClient, "corename");
up.commit(solrClient, "corename");
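The same map-based approach covers Solr's other atomic-update modifiers; a small sketch, with the field name purely illustrative:
// Besides "set", Solr 4 atomic updates also understand "add" (append to a
// multi-valued field) and "inc" (increment a numeric field).
Map<String, Object> inc = new HashMap<>();
inc.put("inc", 1);
doc.addField("viewCount", inc); // "viewCount" is an assumed numeric field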


Using ASP.NET Core Web API WITHOUT Entity Framework

I need to build a Web API in ASP.NET Core without Entity Framework. It's an existing database that has some custom stored procedures, and we do not want to use EF.
I searched this topic and can't find anything about it. Is this even possible?
This is possible.
The first problem you will run into is getting the database connection string. You will want to inject the configuration to do so. In a controller, it might look like this:
private readonly IConfiguration _configuration;
public WeatherForecastController(ILogger<WeatherForecastController> logger, IConfiguration configuration)
{
_logger = logger;
_configuration = configuration;
}
Add using System.Data and using System.Data.SqlClient (you'll need a NuGet package for SqlClient) as well as using Microsoft.Extensions.Configuration. With access to the database, you write the code "old style", for example:
[HttpGet]
[Route("[controller]/movies")]
public IEnumerable<Movie> GetMovies()
{
    List<Movie> movies = new List<Movie>();
    string connString = _configuration.GetConnectionString("RazorPagesMovieContext");
    using (SqlConnection conn = new SqlConnection(connString))
    using (SqlDataAdapter sda = new SqlDataAdapter("SELECT * FROM Movie", conn))
    {
        conn.Open();
        DataSet ds = new DataSet();
        sda.Fill(ds);
        DataTable dt = ds.Tables[0];
        foreach (DataRow dr in dt.Rows)
        {
            movies.Add(new Movie
            {
                ID = (int)dr["ID"],
                Title = dr["Title"].ToString(),
                ReleaseDate = (DateTime)dr["ReleaseDate"],
                Genre = dr["Genre"].ToString(),
                Price = (decimal)dr["Price"],
                Rating = dr["Rating"].ToString()
            });
        }
    }
    return movies.ToArray();
}
The connection string itself is defined in appsettings.json:
"ConnectionStrings": {
"RazorPagesMovieContext": "Server=localhost;Database=Movies;Trusted_Connection=True;MultipleActiveResultSets=true"
}
Yes, it is possible; just implement the API yourself. Here is also a sample for the identity scaffold without EF:
https://markjohnson.io/articles/asp-net-core-identity-without-entity-framework/
We just used Dapper as our ORM in a project rather than EF.
https://dapper-tutorial.net/
It is similar to ADO.NET, but it has some additional features that we leveraged, and it was really clean to implement.
I realize this is an old question, but it came up in a search I ran so I figured I'd add to the answers given.
First, if the custom stored procedures are your concern, you can still run them using Entity Framework's .FromSql method (see here for reference: https://www.entityframeworktutorial.net/efcore/working-with-stored-procedure-in-ef-core.aspx)
The relevant info is found at the top of the page:
EF Core provides the following methods to execute a stored procedure:
1. DbSet<TEntity>.FromSql(<sqlcommand>)
2. DbContext.Database.ExecuteSqlCommand(<sqlcommand>)
If you are avoiding Entity Framework for other reasons, it's definitely possible to use any database connection method you want in ASP.NET Core. Just implement your database access with whatever library is relevant to your database, and set up your controller to return the data in whatever format you need. Most, if not all, of Microsoft's examples return Entity Framework entities, but you can easily return any other data format.
As an example, this controller method returns a MemoryStream object after running a query against an MS SQL server. (Note: in most cases where you want data returned it should be a "GET" method, not "POST" as done here, but I needed to send and use information in the HttpPost body.)
[HttpPost]
[Route("Query")]
public ActionResult<Stream> Query([FromBody]SqlDto content)
{
return Ok(_msSqlGenericService.Query(content.SqlCommand, content.SqlParameters));
}
Instead of a MemoryStream, you could return a generic DataTable or a List of any custom class you want. Note that you'll also need to determine how you are going to serialize/deserialize your data.

Apache Ignite Continuous Queries: How to get the field names and field values in the listener updates when there are dynamic fields?

I am working on a POC on whether or not we should go ahead with Apache Ignite for both commercial and enterprise use. There is a use case, though, that we are trying to find an answer for.
Preconditions
Dynamic creation of tables, i.e. there may be new fields to put into the cache, meaning there is no precompiled POJO (model) defining the attributes of the table/cache.
Use case
I would like to write a SELECT continuous query that gives me the results that are modified. I wrote that query, but the problem is that when the listener gets a notification, no method call gives me all the field names that were modified. I would like to get all the field names and field values in some sort of Map, which I can then use and submit to other systems.
You could track all modified field values using binary objects and a continuous query:
IgniteCache<Integer, BinaryObject> cache = ignite.cache("person").withKeepBinary();
ContinuousQuery<Integer, BinaryObject> query = new ContinuousQuery<>();
query.setLocalListener(events -> {
    for (CacheEntryEvent<? extends Integer, ? extends BinaryObject> event : events) {
        // Field names come from the binary type's metadata, so no POJO is needed.
        BinaryType type = ignite.binary().type("Person");
        if (event.getOldValue() != null && event.getValue() != null) {
            HashMap<String, Object> oldProps = new HashMap<>();
            HashMap<String, Object> newProps = new HashMap<>();
            for (String field : type.fieldNames()) {
                oldProps.put(field, event.getOldValue().field(field));
                newProps.put(field, event.getValue().field(field));
            }
            // Guava's Maps.difference pinpoints the fields whose values changed.
            com.google.common.collect.MapDifference<Object, Object> diff =
                com.google.common.collect.Maps.difference(oldProps, newProps);
            System.out.println(diff.entriesDiffering());
        }
    }
});
cache.query(query);
cache.put(1, ignite.binary().builder("Person").setField("name", "Alice").build());
cache.put(1, ignite.binary().builder("Person").setField("name", "Bob").build());
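One detail to keep in mind: cache.query(query) returns a cursor that keeps the continuous query registered. A sketch of holding on to it so the subscription can be shut down later:
// Keep the cursor for the lifetime of the subscription.
QueryCursor<Cache.Entry<Integer, BinaryObject>> cursor = cache.query(query);
// Later, when updates are no longer needed, close it to deregister the listener.
cursor.close();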

Is there way to pass in object into CmisExtensionElement?

I have a custom aspect, and I'm trying to update its property through OpenCMIS with CmisExtensionElement.
Currently, I'm able to update custom properties of type String with the following code:
CmisExtensionElement extension = new CmisExtensionElementImpl(namespace, "value", null, String-value);
The question is: how can I update a custom aspect that has a property of type datetime, given that I'm not able to pass in anything other than a String? (If I convert the date object into a string and pass it on, it throws an error...)
Judging by this: https://chemistry.apache.org/docs/cmis-samples/samples/properties/
you should probably use something like:
Map<String, Object> properties = new HashMap<String, Object>();
properties.put("my:dateVar1", new GregorianCalendar());
// OR
properties.put("my:dateVar2", new Date());
// update
cmisObject.updateProperties(properties);
Here is an example of code provided by Jeff Potts that shows how to do it: https://gist.github.com/jpotts/6136702
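For completeness, a minimal sketch of the whole call, assuming an open OpenCMIS Session; the document path and property id are hypothetical:
// Hypothetical document path and aspect property, for illustration only.
CmisObject cmisObject = session.getObjectByPath("/some/folder/some-document");
Map<String, Object> properties = new HashMap<String, Object>();
properties.put("my:dateVar", new GregorianCalendar()); // datetime property value
cmisObject.updateProperties(properties);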

Appending instead of overwriting table in BigQuery API

I currently use bigquery.tabledata().insertAll() to put data into BigQuery. However, it overwrites all previous content instead of appending to it. Is there a way to change the default behaviour, or should I use another method?
Code below:
GoogleCredential credential = GoogleCredential.fromStream(...);
if (credential.createScopedRequired()) {
    credential = credential.createScoped(BigqueryScopes.all());
}
bigquery = new Bigquery.Builder(new NetHttpTransport(), new GsonFactory(), credential)
        .setApplicationName("Bigquery Samples").build();
TableDataInsertAllRequest.Rows r = new TableDataInsertAllRequest.Rows();
r.setInsertId("123"); // note: the same hard-coded insertId is sent every time
ObjectMapper m = new ObjectMapper();
Map<String, Object> props = m.convertValue(person, Map.class);
r.setJson(props);
TableDataInsertAllRequest content = new TableDataInsertAllRequest().setRows(Arrays.asList(r));
content.setSkipInvalidRows(true);
content.setIgnoreUnknownValues(true);
// The three string arguments are projectId, datasetId and tableId.
TableDataInsertAllResponse execute = bigquery.tabledata().insertAll("", "", "", content).execute();
The solution is to assign a [globally] unique ID as the InsertId.
BigQuery uses the insertId property to detect duplicate insertion requests on a best-effort basis.
If you ignore this, you might end up with unwanted duplicate rows!
See more in https://cloud.google.com/bigquery/streaming-data-into-bigquery#dataconsistency
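Applied to the code in the question, the fix is a one-line change; a sketch (a UUID is just one convenient way to get a unique id per row):
TableDataInsertAllRequest.Rows r = new TableDataInsertAllRequest.Rows();
// A unique insertId per row, so best-effort de-duplication never collapses
// distinct rows the way the hard-coded "123" above does.
r.setInsertId(java.util.UUID.randomUUID().toString());
r.setJson(props);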
Oh, I found the answer.
Inserts with the same id (if set via setInsertId(id)) are overridden by the next insert with the same id.
Solution: do not set an InsertId.
EDIT: see @Mikhail Berlayant's response for why you should care about the InsertId.

Use lucene index in java application

Recently I started working on Solr. I have created an index in Solr and I want to query it through my Java application. I don't want to use solr.war in my application. How can I use the index through the SolrJ API or the Lucene Java API? My thinking is to add the index to the project context and use it there. I went through some examples/tutorials but did not find any on how to work with an already created index. Please tell me a proper solution; any link describing one will be appreciated.
You can use the Lucene APIs to create/update and search an index.
As Solr is based on Lucene, the underlying index is a Lucene index.
Lucene exposes classes such as IndexWriter and IndexSearcher, which let you interact with the index.
Example of searching over a Solr/Lucene index:
// Open the existing index directory and search it (Lucene 3.x-style API).
Directory index = FSDirectory.open(new File("/path/to/index"));
IndexSearcher searcher = new IndexSearcher(index, true); // true = read-only
TopScoreDocCollector collector = TopScoreDocCollector.create(10, true);
searcher.search(q, collector); // q is any Lucene Query built beforehand
ScoreDoc[] hits = collector.topDocs().scoreDocs;
You should be able to find plenty of examples of this.
Yes, you can use a Solr-created index with Lucene; there's nothing special about it, because Solr itself uses Lucene. All Lucene documentation applies unchanged.
Or, if you don't want to run Solr as a server, you can use it embedded in your Java application.
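A minimal sketch of the embedded approach with SolrJ (Solr 3.x/4.x-era API; the core name and paths are illustrative, and the bootstrap classes changed in later Solr versions):
// Point Solr at the home directory containing your existing core/index.
System.setProperty("solr.solr.home", "/path/to/solr/home"); // assumed path
CoreContainer.Initializer initializer = new CoreContainer.Initializer();
CoreContainer coreContainer = initializer.initialize();
EmbeddedSolrServer server = new EmbeddedSolrServer(coreContainer, "collection1");
// Query the index exactly as you would a remote Solr instance.
QueryResponse response = server.query(new SolrQuery("*:*"));
System.out.println(response.getResults().getNumFound());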
I made it work this way:
String realPath = request.getRealPath("/");
StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_CURRENT);
Directory index = FSDirectory.open(new File(realPath + "/index"));
IndexSearcher indexSearcher = new IndexSearcher(index, true);
TopScoreDocCollector collector = TopScoreDocCollector.create(2000, true);
QueryParser parser = new QueryParser(Version.LUCENE_CURRENT, "name", analyzer);
Query q = null;
try {
    q = parser.parse("*:*"); // match-all query
} catch (ParseException e) {
    e.printStackTrace();
}
indexSearcher.search(q, collector);
ScoreDoc[] scoreDoc = collector.topDocs().scoreDocs;