I have a requirement that involves Apache Ignite SQL. When creating a table, I want something similar to setting an auto-increment primary key in MySQL. How can I make the primary key auto-increment when creating a table in Apache Ignite?
There is no auto-increment in Ignite SQL, but you can implement a custom SQL function that generates IDs based on IgniteAtomicSequence:
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteAtomicSequence;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.annotations.QuerySqlFunction;

public class SqlFunc {
    @QuerySqlFunction
    public static long nextId() {
        Ignite ignite = Ignition.ignite();
        // Gets (or creates on first call) a cluster-wide atomic sequence.
        IgniteAtomicSequence seq = ignite.atomicSequence("seq", 0, true);
        return seq.getAndIncrement();
    }
}
Here is a cache configuration that allows the nextId() function to be used in SQL:
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="cache"/>
    <property name="sqlFunctionClasses" value="com.example.SqlFunc"/>
    <property name="sqlSchema" value="PUBLIC"/>
</bean>
More on custom SQL functions: https://apacheignite-sql.readme.io/docs/custom-sql-functions
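With that in place, the function can be called like any built-in SQL function. For example, a hypothetical insert into a Person table (the table and its columns are illustrative, not from the question):

INSERT INTO Person (id, name) VALUES (nextId(), 'John');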
UPD:
Note that every time IgniteAtomicSequence reserves a range of IDs, an internal transaction is started. This may lead to unexpected consequences, such as deadlocks, if explicit transactions are used.
So this approach should be used with care. In particular, SQL queries that use the nextId() function shouldn't be run within transactions.
Related
As https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions says, Hive supports some limited ACID transactions. So, if I just need row-level transactions, is Hive enough? Are HBase's advantages becoming less and less significant?
Thanks.
It is possible to do ACID transactions in HBase with Apache Phoenix, a layer on top of HBase that provides an SQL interface for handling data.
To use transactions, after installing Phoenix you set the property phoenix.transactions.enabled to true in your hbase-site.xml, then use the TRANSACTIONAL option when you create your table. For example:
CREATE TABLE my_table (id INTEGER PRIMARY KEY, val VARCHAR) TRANSACTIONAL=true;
Following that, you simply interact with your table normally, with SQL through JDBC or another interface. (Note that you can also alter an existing non-transactional table to be transactional.)
For more, you can read about Phoenix and its transaction support at the project's website:
https://phoenix.apache.org/transactions.html
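For illustration, here is a minimal JDBC sketch of a transactional upsert against the table created above (the connection URL and ZooKeeper host are assumptions; adjust them to your cluster):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class PhoenixTxExample {
    public static void main(String[] args) throws Exception {
        // "localhost" stands in for your ZooKeeper quorum.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            conn.setAutoCommit(false); // group statements into one transaction
            try (PreparedStatement ps =
                     conn.prepareStatement("UPSERT INTO my_table VALUES (?, ?)")) {
                ps.setInt(1, 1);
                ps.setString(2, "first");
                ps.executeUpdate();
            }
            conn.commit(); // the upsert becomes visible atomically
        }
    }
}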
Can anyone tell me whether there is any time-based trigger policy available in Apache Ignite?
I have an object with an expiry date. When that date (timestamp) expires, I want to update the value and override it in the cache. Is this possible in Apache Ignite?
Thanks in advance
You can configure a time-based expiration policy in Apache Ignite with eager TTL: Expiry Policies. This way, objects will be eagerly expired from the cache after a certain time.
Then you can subscribe to a javax.cache.event.CacheEntryExpiredListener, which will be triggered after every expiration, and update the cache from that listener. However, there will be a small window between the moment the entry has expired from the cache and the moment you put an updated value back into it.
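A minimal sketch of that listener approach, assuming a String-to-String cache (the cache name, TTL duration, and refresh logic are illustrative):

import java.io.Serializable;
import java.util.concurrent.TimeUnit;
import javax.cache.configuration.FactoryBuilder;
import javax.cache.configuration.MutableCacheEntryListenerConfiguration;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryExpiredListener;
import javax.cache.event.CacheEntryListenerException;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class ExpiryRefreshExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        CacheConfiguration<String, String> cfg = new CacheConfiguration<>("myCache");
        cfg.setEagerTtl(true); // expire entries proactively, not only on access
        cfg.setExpiryPolicyFactory(
            CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, 30)));

        IgniteCache<String, String> cache = ignite.getOrCreateCache(cfg);

        // Re-populate an entry whenever it expires.
        cache.registerCacheEntryListener(new MutableCacheEntryListenerConfiguration<>(
            FactoryBuilder.factoryOf(new RefreshListener()), null, true, false));
    }

    static class RefreshListener
            implements CacheEntryExpiredListener<String, String>, Serializable {
        @Override public void onExpired(
                Iterable<CacheEntryEvent<? extends String, ? extends String>> events)
                throws CacheEntryListenerException {
            IgniteCache<String, String> cache = Ignition.ignite().cache("myCache");
            for (CacheEntryEvent<? extends String, ? extends String> e : events)
                cache.put(e.getKey(), "refreshed"); // your update logic goes here
        }
    }
}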
If the above window is not acceptable to you, then you can simply query all entries from the cache periodically and update all the entries that are older than a certain expiration time. In this case you would have to ensure that all entries have a timestamp field, which is indexed and used in SQL queries. Something like this:
SELECT * FROM SOME_TYPE WHERE timestamp < 2;
More on SQL queries here: Distributed Queries, Local Queries.
Maybe like this:
cache.withExpiryPolicy(new CreatedExpiryPolicy(new Duration(TimeUnit.SECONDS, 123))).put(k, v);
The expiration will be applied only to this entry.
For a trigger, try continuous queries: https://apacheignite.readme.io/docs/continuous-queries
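A small sketch of a continuous query reacting to cache updates (assuming an existing IgniteCache<String, String> named cache; types and messages are illustrative):

import javax.cache.Cache;
import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

ContinuousQuery<String, String> qry = new ContinuousQuery<>();
qry.setLocalListener(events -> {
    for (CacheEntryEvent<? extends String, ? extends String> e : events)
        System.out.println("Changed: " + e.getKey() + " -> " + e.getValue());
});
// Keep the cursor open for as long as you want to receive notifications.
QueryCursor<Cache.Entry<String, String>> cur = cache.query(qry);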
I recently came across this problem: my transactional method (marked as @Transactional) failed to roll back when I used native SQL. If I instead used the Hibernate API to save an entity, it did roll back!
Background
I have a class Employee and a table tbl_employee. I want to use native SQL to insert rows into tbl_employee, and each insertion will insert at most 500 rows. For example, if I have 600 employees to insert, I will create two SQL insert queries (one for the first 500 and the other for the next 100). My insert method looks like this (to keep it simple, I only consider inserting between 500 and 1,000 rows):
@Transactional
public void insert(List<Employee> list) throws RuntimeException {
    Dao dao = new Dao();
    // buildSQL is a simple method that builds one insert query for the given sublist.
    String sql1 = buildSQL(list.subList(0, 500));           // the first 500
    String sql2 = buildSQL(list.subList(500, list.size())); // the remaining
    dao.executeSQL(sql1); // dao runs the SQL query against the database
    dao.executeSQL(sql2);
}
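For context, the kind of multi-row statement buildSQL is meant to produce looks like this (table columns are illustrative):

INSERT INTO tbl_employee (id, name)
VALUES (1, 'Alice'), (2, 'Bob'), (3, 'Carol');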
And here is the Spring configuration. I use HibernateTransactionManager.
<bean id="txManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
    <property name="sessionFactory">
        <ref local="mySessionFactory"/>
    </property>
</bean>
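For @Transactional to be honored with XML configuration, the context typically also needs annotation-driven transaction support enabled; a sketch, assuming the Spring tx namespace is declared:

<tx:annotation-driven transaction-manager="txManager"/>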
My Dao class runs the SQL against the database, and if there is an exception during insertion, I throw a RuntimeException.
The problem
I tried to insert 501 employees into an empty table. The first 500 have unique IDs, and the last one has the same ID as the first. Therefore, when I insert the last one, there is a "DUPLICATE PRIMARY KEY" error. I expected the insert() method above to roll back, so that no rows would be inserted into the table. However, I noticed that the first 500 got inserted into the database!
I then tried to use the Hibernate API. My insert method becomes:
@Transactional
public void insert(List<Employee> list) throws RuntimeException {
    Dao dao = new Dao();
    for (Employee employee : list) {
        dao.save(employee);
    }
}
And it actually did what I expected! No rows were inserted into the database once there was an exception.
My questions
I wonder whether Spring/Hibernate supports transaction management for native SQL.
The rollback works if I directly "save" an entity using Hibernate. The reason I want to use native SQL is that I actually need to insert employees into different tables based on, say, the insertion datetime (e.g. the new employees this year will be inserted into tbl_employee_2015). Therefore it's easier to map to different tables using native SQL. Is there any other elegant way to handle mapping to different tables?
I am assuming that a single SQL statement inserting multiple rows is very efficient. Therefore I want to insert 500 rows with one statement; I don't want to call Hibernate's save one by one in a loop. Is this assumption correct?
Thank you so much for your attention!
I have two entities with a many-to-many relationship defined on them.
<set name="TreasuryCodes" table="ServiceProviderAccountTreasuryCode" lazy="true" cascade="all">
    <key column="ServiceProviderAccountId" />
    <many-to-many column="TreasuryCodeId" class="TreasuryCode" />
</set>
<set name="ServiceProviderAccounts" table="ServiceProviderAccountTreasuryCode" lazy="true" inverse="true" cascade="all">
    <key column="TreasuryCodeId" />
    <many-to-many column="ServiceProviderAccountId" class="ServiceProviderAccount" />
</set>
Now I want to delete all ServiceProviderAccounts by ServiceProviderId. I write this code:
public void DeleteAllAccount(int serviceProviderId)
{
    const string query = "delete ServiceProviderAccount spa where spa.ServiceProvider.Id = :serviceProviderId";
    repository.Session.CreateQuery(query)
        .SetInt32("serviceProviderId", serviceProviderId)
        .ExecuteUpdate();
    repository.Session.Flush();
}
and I receive this exception:
Test method Test.ServiceRepositoryTest.DeleteAllAccountTest threw exception:
NHibernate.Exceptions.GenericADOException: could not execute update query[SQL: delete from ServiceProviderAccount where ServiceProviderId=?] ---> System.Data.SqlClient.SqlException: The DELETE statement conflicted with the REFERENCE constraint "FKBC88A84CB684BF79". The conflict occurred in database "Test", table "dbo.ServiceProviderAccountTreasuryCode", column 'ServiceProviderAccountId'.
The statement has been terminated.
I'm confused: since I have defined cascade on the entity, shouldn't NHibernate remove the rows from ServiceProviderAccountTreasuryCode?
UPDATE
OK, it looks like ExecuteUpdate does not honor NHibernate cascades, probably because it doesn't load entities before deleting them? Anyway, is there any other way to delete from the ServiceProviderAccountTreasuryCode table and then from ServiceProviderAccounts via HQL? I know I can use cascades in the database, but I want to avoid that. What I want is to delete rows from the many-to-many association table via HQL. Is that possible? Or should I use plain SQL?
It looks like you have a referential integrity problem, i.e. a foreign key relationship where the ID you are deleting is referenced somewhere else, and that table would end up referencing nothing. If that is what you want to do, you can run the TRUNCATE command, but I am not sure why you would do that.
I would suggest you do a normal delete, i.e. using the NHibernate session and LINQ as below:
foreach (var account in Session.Linq<ServiceProviderAccount>().Where(x => x.ServiceProvider.Id == serviceProviderId))
    Session.Delete(account);
Note that this is not at all a bad way to do your deletion, as the deletes are not fired against the DB immediately; they are part of the session until your transaction is committed, and this should handle your referential integrity problems if your mappings are defined correctly.
Hope this works..
It looks like it doesn't obey the cascades. HQL batch operations for update/delete are relatively new and translate more or less directly to SQL, so I believe you must take care of the related tables yourself.
If you only delete single entities, the batch delete doesn't do you much good anyway. For NHibernate to actually take cascading into account, it must load the actual entity, which your example doesn't do.
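One way around it, sketched below: since the association table is not a mapped entity, HQL cannot target it directly, so you can delete the join rows with native SQL first and then batch-delete the accounts with HQL (the Id column name is assumed; ServiceProviderAccountId and ServiceProviderId follow the mapping and error message above):

// 1. Remove the many-to-many rows that reference the accounts.
repository.Session.CreateSQLQuery(
        @"DELETE FROM ServiceProviderAccountTreasuryCode
          WHERE ServiceProviderAccountId IN
                (SELECT Id FROM ServiceProviderAccount
                 WHERE ServiceProviderId = :spId)")
    .SetInt32("spId", serviceProviderId)
    .ExecuteUpdate();

// 2. Now the accounts themselves can be deleted without violating the FK.
repository.Session.CreateQuery(
        "delete ServiceProviderAccount spa where spa.ServiceProvider.Id = :spId")
    .SetInt32("spId", serviceProviderId)
    .ExecuteUpdate();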
I asked a similar question, the answer I got might interest you
Remove entity in NHibernate only by primary key
How do I configure NHibernate to create the db schema with a column like this:
create_dt datetime not null default getdate()
I have this in the mapping file:
<property name="create_dt" update="false" insert="false" generated="insert" not-null="true" />
Is there any way I can inject the SQL Server-specific default getdate()? The documentation for generated properties even mentions that this is how you handle a create_date field. I'm just not sure how to make my db schema generate properly. Will I have to edit the create table scripts manually?
Similar question.
EDIT: I figured out I can always change the table schema like so:
<database-object>
    <create>ALTER TABLE Report ADD CONSTRAINT DF_report_create_dt DEFAULT getdate() FOR create_dt;</create>
    <drop></drop>
</database-object>
and I could add a trigger in the same way for an update_dt type of field. This seems better than supplying explicit insert and update statements that use getdate().
I always prefer to use the NHibernate event system to set my audit properties, like created date or update date. (See the event system documentation here.)
I prefer this approach because it keeps the logic out of my database layer but also it gives me the ability to have a single location in my code that is responsible for setting these values. And if I have a common base class for all my entities then I can even guarantee consistent behavior throughout my domain.
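As a sketch of that approach (the IAuditable interface and the CreatedDate property name are hypothetical, not part of NHibernate):

using System;
using NHibernate.Event;
using NHibernate.Persister.Entity;

// Hypothetical marker interface your entities implement.
public interface IAuditable
{
    DateTime CreatedDate { get; set; }
}

[Serializable]
public class AuditEventListener : IPreInsertEventListener
{
    public bool OnPreInsert(PreInsertEvent @event)
    {
        var auditable = @event.Entity as IAuditable;
        if (auditable != null)
        {
            var now = DateTime.Now;
            // Update both the entity and the state array NHibernate persists.
            SetState(@event.Persister, @event.State, "CreatedDate", now);
            auditable.CreatedDate = now;
        }
        return false; // false = do not veto the insert
    }

    private static void SetState(IEntityPersister persister,
                                 object[] state, string propertyName, object value)
    {
        var index = Array.IndexOf(persister.PropertyNames, propertyName);
        if (index >= 0)
            state[index] = value;
    }
}

The listener would then be registered on the NHibernate Configuration (e.g. via Configuration.EventListeners.PreInsertEventListeners) before building the session factory.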
This is an answer on a thread for Hibernate... it should port over to NHibernate without changing it:
https://forum.hibernate.org/viewtopic.php?f=25&t=996901&view=previous
Please see the last post.
Failing that, I always generate the "date created" of an object in the constructor of the class:
public class MyClass
{
    private DateTime createdDate;

    public MyClass()
    {
        createdDate = DateTime.Now;
    }
}