Setting data in the cloned seismic cube - Ocean

I am quite new to the Ocean Framework. I have cloned a pre-existing seismic cube to create a new seismic cube:
// Get the parent cube
SeismicCube ParentCube = InputSeismicLine3D.SeismicCube;
// Get the seismic collection that owns it
SeismicCollection Sc = ParentCube.SeismicCollection;

if (Sc.CanCreateSeismicCube(ParentCube))
{
    SeismicCube NewCube = Sc.CreateSeismicCube(ParentCube, ParentCube.Template);
}
Can anyone tell me how to set the trace data in the "NewCube"?
Thanks in advance.

SeismicCube has a Traces property.
From the SDK:
Writing is only possible for cubes that return IsWritable as true while the cube is locked in a transaction. Trace value changes (like trace[12] = 123.0) are flushed automatically at regular intervals when you advance to the next trace with enumerator.MoveNext(). The value range is recomputed when you finish iterating (when MoveNext() returns false).
In your example:
if (Sc.CanCreateSeismicCube(ParentCube))
{
    SeismicCube NewCube = Sc.CreateSeismicCube(ParentCube, ParentCube.Template);
    if (!NewCube.IsWriteable)
        return;

    using (ITransaction trans = DataManager.NewTransaction())
    {
        trans.Lock(NewCube);
        foreach (ITrace trace in NewCube.Traces)
        {
            // Do some setting of trace values here. Example only:
            for (int i = 0; i < trace.Length; i++)
            {
                trace[i] = trace.I + trace.J + i;
            }
        }
        trans.Commit();
    }
}

Hibernate Search manual indexing throws "org.hibernate.TransientObjectException: The instance was not associated with this session"

I use Hibernate Search 5.11 in my Spring Boot 2 application to provide full-text search.
This library requires documents to be indexed.
When my app is launched, I manually re-index the data of an indexed entity (MyEntity.class) every five minutes (for specific reasons, due to my server context).
MyEntity.class has an attachedFiles property, a HashSet filled through a @OneToMany() join with lazy loading enabled:
@OneToMany(mappedBy = "myEntity", cascade = CascadeType.ALL, orphanRemoval = true)
private Set<AttachedFile> attachedFiles = new HashSet<>();
I wrote the required indexing process, but an exception is thrown on fullTextSession.index(result) when the attachedFiles property of a given entity contains one or more items:
org.hibernate.TransientObjectException: The instance was not associated with this session
In debug mode, a message like "Unable to load [...]" is shown for the entity's HashSet value in this case.
If the HashSet is empty (not null, just empty), no exception is thrown.
My indexing method:
private void indexDocumentsByEntityIds(List<Long> ids) {
    final int BATCH_SIZE = 128;
    Session session = entityManager.unwrap(Session.class);
    FullTextSession fullTextSession = Search.getFullTextSession(session);
    fullTextSession.setFlushMode(FlushMode.MANUAL);
    fullTextSession.setCacheMode(CacheMode.IGNORE);

    CriteriaBuilder builder = session.getCriteriaBuilder();
    CriteriaQuery<MyEntity> criteria = builder.createQuery(MyEntity.class);
    Root<MyEntity> root = criteria.from(MyEntity.class);
    criteria.select(root).where(root.get("id").in(ids));
    TypedQuery<MyEntity> query = fullTextSession.createQuery(criteria);
    List<MyEntity> results = query.getResultList();

    int index = 0;
    for (MyEntity result : results) {
        index++;
        try {
            fullTextSession.index(result); // index each element
            if (index % BATCH_SIZE == 0 || index == ids.size()) {
                fullTextSession.flushToIndexes(); // apply changes to indexes
                fullTextSession.clear(); // free memory since the queue is processed
            }
        } catch (TransientObjectException toEx) {
            LOGGER.info(toEx.getMessage());
            throw toEx;
        }
    }
}
Does anyone have an idea?
Thanks!
This is probably caused by the "clear" call you have in your loop.
In essence, what you're doing is:
load all entities to reindex into the session
index one batch of entities
remove all entities from the session (fullTextSession.clear())
try to index the next batch of entities, even though they are no longer in the session.
What you need to do is to only load each batch of entities after the session clearing, so that you're sure they are still in the session when you index them.
There's an example of how to do this in the documentation, using a scroll and an appropriate batch size: https://docs.jboss.org/hibernate/search/5.11/reference/en-US/html_single/#search-batchindex-flushtoindexes
Alternatively, you can just split your ID list in smaller lists of 128 elements, and for each of these lists, run a query to get the corresponding entities, reindex all these 128 entities, then flush and clear.
Thanks for the explanations @yrodiere, they helped me a lot!
I chose your alternative solution:
Alternatively, you can just split your ID list in smaller lists of 128 elements, and for each of these lists, run a query to get the corresponding entities, reindex all these 128 entities, then flush and clear.
...and everything works perfectly!
Well spotted!
See the code solution below:
private List<List<Object>> splitList(List<Object> list, int subListSize) {
    List<List<Object>> splittedList = new ArrayList<>();
    if (!CollectionUtils.isEmpty(list)) {
        int i = 0;
        int nbItems = list.size();
        while (i < nbItems) {
            int maxLastSubListIndex = i + subListSize;
            int lastSubListIndex = (maxLastSubListIndex > nbItems) ? nbItems : maxLastSubListIndex;
            List<Object> subList = list.subList(i, lastSubListIndex);
            splittedList.add(subList);
            i = lastSubListIndex;
        }
    }
    return splittedList;
}
private void indexDocumentsByEntityIds(Class<Object> clazz, String entityIdPropertyName, List<Object> ids) {
    Session session = entityManager.unwrap(Session.class);
    List<List<Object>> splittedIdsLists = splitList(ids, 128);
    for (List<Object> splittedIds : splittedIdsLists) {
        FullTextSession fullTextSession = Search.getFullTextSession(session);
        fullTextSession.setFlushMode(FlushMode.MANUAL);
        fullTextSession.setCacheMode(CacheMode.IGNORE);
        Transaction transaction = fullTextSession.beginTransaction();

        CriteriaBuilder builder = session.getCriteriaBuilder();
        CriteriaQuery<Object> criteria = builder.createQuery(clazz);
        Root<Object> root = criteria.from(clazz);
        criteria.select(root).where(root.get(entityIdPropertyName).in(splittedIds));
        TypedQuery<Object> query = fullTextSession.createQuery(criteria);
        List<Object> results = query.getResultList();

        int index = 0;
        for (Object result : results) {
            index++;
            try {
                fullTextSession.index(result); // index each element
                if (index == splittedIds.size()) {
                    fullTextSession.flushToIndexes(); // apply changes to indexes
                    fullTextSession.clear(); // free memory since the queue is processed
                }
            } catch (TransientObjectException toEx) {
                LOGGER.info(toEx.getMessage());
                throw toEx;
            }
        }
        transaction.commit();
    }
}

Update context in SQL Server from ASP.NET Core 2.2

_context.Update(v);
_context.SaveChanges();
When I use this code, SQL Server adds a new record instead of updating the existing one.
[HttpPost]
public IActionResult PageVote(List<string> Sar)
{
    string name_voter = ViewBag.getValue = TempData["Namevalue"];
    int count = 0;
    foreach (var item in Sar)
    {
        count = count + 1;
    }
    if (count == 6)
    {
        Vote v = new Vote()
        {
            VoteSarparast1 = Sar[0],
            VoteSarparast2 = Sar[1],
            VoteSarparast3 = Sar[2],
            VoteSarparast4 = Sar[3],
            VoteSarparast5 = Sar[4],
            VoteSarparast6 = Sar[5],
        };
        var voter = _context.Votes.FirstOrDefault(u => u.Voter == name_voter && u.IsVoted == true);
        if (voter == null)
        {
            v.IsVoted = true;
            v.Voter = name_voter;
            _context.Add(v);
            _context.SaveChanges();
            ViewBag.Greeting = "رای شما با موفقیت ثبت شد";
            return RedirectToAction(nameof(end));
        }
        v.IsVoted = true;
        v.Voter = name_voter;
        _context.Update(v);
        _context.SaveChanges();
        return RedirectToAction(nameof(end));
    }
    else
    {
        return View(_context.Applicants.ToList());
    }
}
You need to tell the DbContext about your entity. If you do var vote = new Vote(), vote has no Id. The DbContext sees this and thinks you want to Add a new entity, so it simply does that. The DbContext tracks all the entities that you load from it, but since this is just a new instance, it has no idea about it.
To actually perform an update, you have two options:
1 - Load the Vote from the database in some way; if you have an Id, use that to find it.
// Loads the current vote by its id (or whatever other field..)
var existingVote = context.Votes.Single(p => p.Id == id_from_param);
// Perform the changes you want..
existingVote.SomeField = "NewValue";
// Then call save normally.
context.SaveChanges();
2 - Or, if you don't want to load it from the DB, you have to manually tell the DbContext what to do:
// create a new "vote"...
var vote = new Vote
{
    // Since it's an update, you must have the Id somehow, so you must set it manually
    Id = id_from_param,
    // Do the changes you want. Be careful, because this can cause data loss!
    SomeField = "NewValue"
};
// This is you telling the DbContext: Hey, I control this entity.
// I know it exists in the DB and it's modified
context.Entry(vote).State = EntityState.Modified;
// Then call save normally.
context.SaveChanges();
Either of those two approaches should fix your issue, but I suggest you read a little bit more about how Entity Framework works. This is crucial for the success (and performance) of your apps. Option 2 above in particular can cause many issues. There's a reason the DbContext keeps track of entities so you don't have to: it's very complicated, and things can go south fast.
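Applied to the action in the question, option 1 would look roughly like this (a sketch only, reusing the entity that FirstOrDefault already loaded instead of attaching a brand-new Vote):
var voter = _context.Votes.FirstOrDefault(u => u.Voter == name_voter && u.IsVoted == true);
if (voter != null)
{
    // voter is already tracked by the context, so changing its properties
    // and calling SaveChanges() produces an UPDATE, not an INSERT.
    voter.VoteSarparast1 = Sar[0];
    voter.VoteSarparast2 = Sar[1];
    voter.VoteSarparast3 = Sar[2];
    voter.VoteSarparast4 = Sar[3];
    voter.VoteSarparast5 = Sar[4];
    voter.VoteSarparast6 = Sar[5];
    _context.SaveChanges();
    return RedirectToAction(nameof(end));
}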
Some links for you:
ChangeTracker in Entity Framework Core
Working with Disconnected Entity Graph in Entity Framework Core

Deleting many managed objects selected by fragment name

I want to delete many managed objects, selected by fragment type. There are more than 2000 such elements. Unfortunately I cannot delete them all with one function call; I have to call this function many times until all of them are deleted. How can I delete a list of managed objects in an efficient way? Not defining a page size did not help...
This is my current function:
InventoryFilter filter = new InventoryFilter();
filter.byFragmentType("xy_fragment");
ManagedObjectCollection moc = inventoryApi.getManagedObjectsByFilter(filter);
int count = 0;
// max page size is 2000
for (ManagedObjectRepresentation mo : moc.get(2000).allPages()) {
    if (mo.get("c8y_IsBinary") != null) {
        binariesApi.deleteFile(mo.getId());
    } else {
        inventoryApi.delete(mo.getId());
    }
    LOG.debug(count + " remove: " + mo.getName() + ", " + mo.getType());
    count++;
}
LOG.info("all objects removed, count: " + count);
By calling moc.get(2000).allPages() you already obtain an iterator that queries following pages on demand as you iterate over it.
The problem you are facing is caused by deleting elements from the same list you are iterating over. You delete elements from the first page, but once the second page is queried from the server it no longer contains the expected elements, because you already deleted the first page. Now all elements are shifted forward by your page size.
You can avoid all of that by making a local copy of all elements you want to delete first:
List<ManagedObjectRepresentation> allObjects = Lists.newArrayList(moc.get(2000).allPages());
for (ManagedObjectRepresentation mo : allObjects) {
    // delete here
}
There is no bulk delete allowed on the inventory API so your method of looping through the objects is the correct approach.
A bulk delete is already a dangerous tool on the other APIs but on the inventory API it would give you the potential to accidentally delete all your data with just one call (as all data associated with a managedObject is also deleted upon the deletion of the managedObject).
That is why it is not available.
I solved the problem by calling the method repeatedly until no more elements are found. It is not pretty, but I have no other idea.
public synchronized void removeManagedObjects(String deviceTypeKey) {
    int count = 0;
    do {
        count = deleteManagedObjects(deviceTypeKey);
    } while (count > 0);
}

private int deleteManagedObjects(String deviceTypeKey) {
    InventoryFilter filter = new InventoryFilter();
    filter.byFragmentType("xy_fragment");
    ManagedObjectCollection moc = inventoryApi.getManagedObjectsByFilter(filter);
    int count = 0;
    if (moc == null) {
        LOG.info("ManagedObjectCollection is NULL");
        return count;
    }
    for (ManagedObjectRepresentation mo : moc.get(2000).allPages()) {
        if (mo.get("c8y_IsBinary") != null) {
            binariesApi.deleteFile(mo.getId());
        } else {
            inventoryApi.delete(mo.getId());
        }
        LOG.debug(count + " remove: " + mo.getName() + ", " + mo.getType());
        count++;
    }
    LOG.info("all objects removed, count: " + count);
    return count;
}

Insert 1000000 documents into RavenDB

I want to insert 1000000 documents into RavenDB.
class Program
{
    private static string serverName;
    private static string databaseName;
    private static DocumentStore documentstore;
    private static IDocumentSession _session;

    static void Main(string[] args)
    {
        Console.WriteLine("Start...");
        serverName = ConfigurationManager.AppSettings["ServerName"];
        databaseName = ConfigurationManager.AppSettings["Database"];
        documentstore = new DocumentStore { Url = serverName };
        documentstore.Initialize();
        Console.WriteLine("Initialize database...");
        _session = documentstore.OpenSession(databaseName);
        for (int i = 0; i < 1000000; i++)
        {
            var person = new Person()
            {
                Fname = "Meysam" + i,
                Lname = " Savameri" + i,
                Bdate = DateTime.Now,
                Salary = 6001 + i,
                Address = "BITS provides one foreground and three background priority levels that " +
                          "you can use to prioritize transfer jobs. Higher priority jobs preempt " +
                          "lower priority jobs. Jobs at the same priority level share transfer time, " +
                          "which prevents a large job from blocking small jobs in the transfer " +
                          "queue. Lower priority jobs do not receive transfer time until all the " +
                          "higher priority jobs are complete or in an error state. Background " +
                          "transfers are optimal because BITS uses idle network bandwidth to " +
                          "transfer the files. BITS increases or decreases the rate at which files " +
                          "are transferred based on the amount of idle network bandwidth that is " +
                          "available. If a network application begins to consume more bandwidth, " +
                          "BITS decreases its transfer rate to preserve the user's interactive " +
                          "experience. BITS supports multiple foreground jobs and one background " +
                          "transfer job at the same time.",
                Email = "Meysam" + i + "@hotmail.com",
            };
            _session.Store(person);
            Console.ForegroundColor = ConsoleColor.Green;
            Console.WriteLine("Count:" + i);
            Console.ForegroundColor = ConsoleColor.White;
        }
        Console.WriteLine("Commit...");
        _session.SaveChanges();
        documentstore.Dispose();
        _session.Dispose();
        Console.WriteLine("Complete...");
        Console.ReadLine();
    }
}
but the session doesn't save the changes; I get an error:
An unhandled exception of type 'System.OutOfMemoryException' occurred in mscorlib.dll
A document session is intended to handle a small number of requests. Instead, experiment with inserting in batches of 1024, then dispose the session and create a new one. The reason you get an OutOfMemoryException is that the document session caches all constituent objects to provide a unit of work, which is why you should dispose of the session after inserting each batch.
A neat way to do this is with a Batch LINQ extension:
foreach (var batch in Enumerable.Range(1, 1000000)
                                .Select(i => new Person { /* set properties */ })
                                .Batch(1024))
{
    using (var session = documentstore.OpenSession())
    {
        foreach (var person in batch)
        {
            session.Store(person);
        }
        session.SaveChanges();
    }
}
The implementations of both Enumerable.Range and Batch are lazy and don't keep all the objects in memory.
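Note that Batch is not part of the base class library; it comes from an extension library such as MoreLINQ, or you can write a minimal lazy version yourself. A sketch of such an extension method (names are illustrative):
using System.Collections.Generic;

public static class EnumerableExtensions
{
    // Lazily yields the source in chunks of `size`; only the current chunk is
    // kept in memory, so the million Person objects are never all alive at once.
    public static IEnumerable<List<T>> Batch<T>(this IEnumerable<T> source, int size)
    {
        var bucket = new List<T>(size);
        foreach (var item in source)
        {
            bucket.Add(item);
            if (bucket.Count == size)
            {
                yield return bucket;
                bucket = new List<T>(size);
            }
        }
        if (bucket.Count > 0)
            yield return bucket;
    }
}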
RavenDB also has a bulk API that does a similar thing without the need for additional LINQ extensions:
using (var bulkInsert = store.BulkInsert())
{
    for (int i = 0; i < 1000 * 1000; i++)
    {
        bulkInsert.Store(new User
        {
            Name = "Users #" + i
        });
    }
}
Note that .SaveChanges() isn't called explicitly; it is invoked either when a batch size is reached (configurable in BulkInsert() if needed) or when the bulkInsert is disposed.

How do I mock this value using Rhino Mocks

Here is the method I'm trying to test:
public override void CalculateReductionOnYield()
{
    log.LogEnter();
    if (illus.RpFundStreams.Count <= 0)
    {
        throw new InvalidDataException("No regular premium fund streams which are required in order to calculate reduction on yield");
    }
    // Add the individual ReductionOnYield classes to the collection.
    foreach (RegularPremiumFundStream fs in illus.RpFundStreams)
    {
        foreach (int i in ReductionOnYieldMonths)
        {
            ReductionOnYield roy = new ReductionOnYield(i);
            roy.FundStream = fs;
            ReductionsOnYield.Add(roy);
        }
        foreach (ReductionOnYield redOnYield in ReductionsOnYield)
        {
            if (redOnYield.Month == 0 || illus.RegularPremiumInPlanCurrency == 0M)
            {
                redOnYield.Reduction = 0M;
            }
            else
            {
                double[] regPremiums = new double[redOnYield.Month + 1];
                for (int i = 1; i <= redOnYield.Month; i++)
                {
                    regPremiums[i - 1] = Convert.ToDouble(-1 * redOnYield.FundStream.FundStreamMonths[i].ValRegularPremium);
                }
                regPremiums[redOnYield.Month] = Convert.ToDouble(redOnYield.FundStream.GetFundStreamValue(redOnYield.Month));
                redOnYield.Reduction = Convert.ToDecimal(Math.Pow((1 + Financial.IRR(ref regPremiums, 0.001D)), 12) - 1);
            }
        }
    }
}
How do I mock all the required classes to test the value of redOnYield.Reduction and make sure it is working properly?
e.g. how do I mock redOnYield.FundStream.GetFundStreamValue(redOnYield.Month) and redOnYield.FundStream.FundStreamMonths[i].ValRegularPremium ?
Is this a valid test? Or am I going about this the wrong way?
Without more info on your objects it's hard to say, but you want something like:
var fundStream = MockRepository.GenerateStub<TFundStream>();
fundStream.Stub(f => f.GetFundStreamValue(60)).Return(220000M);

var redOnYield = MockRepository.GenerateStub<TRedOnYield>();
redOnYield.Stub(r => r.FundStream).Return(fundStream);
redOnYield is an object returned from iterating ReductionsOnYield. I don't see where this is coming from. If we assume it's a virtual property, then you'll want to create a collection of mock ReductionOnYield objects and stub out ReductionsOnYield to return your mocked collection (or, to make it easier to test, have CalculateReductionOnYield accept an IEnumerable and operate on that collection).
Once you get the ReductionsOnYield issue resolved, Andrew's response of stubbing out the properties will get you where you want to be. Of course, this assumes that FundStream is virtual (so it can be mocked/stubbed) as well as RegularPremiumFundStream's GetFundStreamValue and FundStreamMonths.
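Putting both answers together, a rough sketch of the collection stubbing might look like this. It assumes GetFundStreamValue is virtual on RegularPremiumFundStream (and that the class has an accessible constructor), that ReductionsOnYield is a virtual property on the class that owns CalculateReductionOnYield (called YieldCalculator here purely as a placeholder), and that the remaining dependencies (illus, ReductionOnYieldMonths, FundStreamMonths) are faked in the same way before the method is actually invoked:
// Sketch only - type names other than those quoted in the question are placeholders.
var fundStream = MockRepository.GenerateStub<RegularPremiumFundStream>();
fundStream.Stub(f => f.GetFundStreamValue(Arg<int>.Is.Anything)).Return(220000M);
// FundStreamMonths[i].ValRegularPremium would need stubbing as well for months > 0.

// FundStream has a public setter in the question's code, so a real instance works here.
var reduction = new ReductionOnYield(12) { FundStream = fundStream };

// Stub the virtual collection property to return the prepared list.
var calculator = MockRepository.GeneratePartialMock<YieldCalculator>();
calculator.Stub(c => c.ReductionsOnYield)
          .Return(new List<ReductionOnYield> { reduction });

// Once illus and ReductionOnYieldMonths are stubbed too, call
// calculator.CalculateReductionOnYield() and assert on reduction.Reduction.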