@SpringBootTest Service Transaction not rolling back if @Test is @Transactional

I am using Spring Boot and my integration test class looks like the following:

@RunWith(SpringRunner.class)
@SpringBootTest(classes = IWantServiceRequestApp.class)
public class ServiceRequestResourceIntTest {
    ...
}
In the following test case I am updating a "ServiceRequest" entity with an address, passing an invalid stateId:
@Test
@Transactional
public void updateWithNonExistingState() throws Exception {
    UpdateServiceRequestDTO updateServiceRequestDTO = serviceRequestMapper.toUpdateServiceRequestDTO(updatedServiceRequest);
    updateWithDefaultUpdateValues(updateServiceRequestDTO); // update properties of updateServiceRequestDTO object
    defaultUpdateAddressDTO.setStateId(NON_EXISTING_ID);
    updateServiceRequestDTO.setServiceLocation(defaultUpdateAddressDTO);

    String payload = TestUtil.convertObjectToJsonString(updateServiceRequestDTO);

    restServiceRequestMockMvc.perform(put("/api/servicerequest")
            .contentType(TestUtil.APPLICATION_JSON_UTF8)
            .content(payload))
        .andExpect(status().isNotFound())
        .andExpect(jsonPath("$.code").value(ResponseCodes.STATE_NOT_FOUND)); // this calls ServiceRequestServiceImpl.update()

    // Validate the ServiceRequest in the database
    List<ServiceRequest> serviceRequests = serviceRequestRepository.findAll();
    assertThat(serviceRequests).hasSize(databaseSizeBeforeUpdate);
    ServiceRequest testServiceRequest = serviceRequests.get(serviceRequests.size() - 1);
    assertNull(testServiceRequest.getServiceLocation());
    assertThat(serviceRequest.getType()).isEqualTo(DEFAULT_TYPE); // **FAILS HERE**
}
When I run the above test case, an exception is thrown from the service layer and @ControllerAdvice converts it into a proper REST response body.
The problem, however, is that in the service layer the entity object is updated with the new field values from the request before the exception is thrown, and when I assert against the DB for the non-updated field values, it returns the updated values.
@Service
@Transactional
public class ServiceRequestServiceImpl implements ServiceRequestService {

    public ServiceRequestDTO update(UpdateServiceRequestDTO updateServiceRequestDTO) {
        ServiceRequest serviceRequest = serviceRequestRepository.findOne(updateServiceRequestDTO.getId());
        serviceRequestMapper.updateServiceRequest(updateServiceRequestDTO, serviceRequest);
        // the above call updates the entity's fields and then throws an exception; the
        // updated fields should not reach the DB, as the transaction should be rolled
        // back due to the exception
        serviceRequest = serviceRequestRepository.save(serviceRequest);
        ServiceRequestDTO result = serviceRequestMapper.toServiceRequestDTO(serviceRequest);
        return result;
    }
}
My thought is that Spring treats the @Transactional on the @Test as the actual transaction and does not roll back the transaction declared on the @Service. If that is the case, I would like to roll back the service transaction only.
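If that is what is happening, one hedged option is to run the service method in its own transaction so it rolls back independently of the test-managed one. A minimal sketch, assuming REQUIRES_NEW propagation is acceptable for this service (not the project's actual code):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ServiceRequestServiceImpl implements ServiceRequestService {

    // REQUIRES_NEW suspends the caller's transaction (including a test-managed one)
    // and starts a fresh one, so an exception here rolls back only this method's work.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public ServiceRequestDTO update(UpdateServiceRequestDTO updateServiceRequestDTO) {
        // ... same body as above ...
    }
}

The trade-off: on the success path the inner transaction really commits, so the test-managed rollback no longer cleans up what this method writes.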

@Transactional method calling ReactiveCassandraRepository.insert with a Flux does not roll back all inserts if one element fails

spring-data-cassandra version 3.4.6
spring-boot version 2.7.7
cassandra version 3.11
DataStax Java driver 3.10.0
We have a Controller method POST /all calling a service class with a transactional method inserting a Flux of simple versioned entities:
Controller:

@PostMapping("/all")
@Operation(summary = "Create all data for the request account")
public Flux<SampleCassandraData> create(@RequestBody Flux<SampleCassandraData> datas,
                                        RequestContext context) {
    datas = datas.doOnNext(data -> data.setAccountId(context.getAccountId()));
    return service.create(datas);
}
Service method:

@Transactional
public Flux<SampleCassandraData> create(@NonNull Flux<SampleCassandraData> datas) {
    return repository.insert(datas);
}
The entity:

@Table("sample_table")
@EqualsAndHashCode(onlyExplicitlyIncluded = true)
public class SampleCassandraData {

    @PrimaryKeyColumn(name = "account_id", type = PARTITIONED)
    @EqualsAndHashCode.Include
    private String accountId;

    @PrimaryKeyColumn(name = "id", type = CLUSTERED, ordering = ASCENDING)
    @EqualsAndHashCode.Include
    private UUID id;

    @Column("content")
    private String content;

    @Version
    private Long version;
}
We have an integration test using a Cassandra Testcontainer that inserts an element and then sends an array of 2 elements including the existing one. The expected response status is 409 CONFLICT, with an OptimisticLockingFailureException in the logs and no data inserted.
@Test
void when_POST_all_with_duplicated_in_1st_then_CONFLICT() {
    val existingData = given_single_created();
    datas.set(0, existingData);
    val newData = given_new_data();
    datas.set(1, newData);

    when_create_all(datas);

    then_error_json_returned(response, CONFLICT);
    then_data_not_saved(newData);
}
Test results:
HttpStatus is 409 CONFLICT, as expected
The response JSON body is our error response, as expected
The service logs show the OptimisticLockingFailureException stack trace, as expected
The test fails: newData was saved in Cassandra when we expected the transaction to be fully rolled back
What are we doing wrong?
Is there a configuration or annotation field we must set to enable full rollback?
Or is it the reactive nature, meaning we can't expect a full rollback?
Thanks in advance
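For what it's worth, Cassandra has no multi-statement ACID transactions, so a Spring-managed rollback cannot undo inserts that already reached the cluster. If full-rollback semantics are required, one option is manual compensation. A hedged sketch of the service method, illustrative only and not a confirmed fix:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public Flux<SampleCassandraData> create(Flux<SampleCassandraData> datas) {
    // Remember rows that were successfully inserted so we can compensate on failure.
    List<SampleCassandraData> inserted = Collections.synchronizedList(new ArrayList<>());
    return repository.insert(datas)
            .doOnNext(inserted::add)
            .onErrorResume(ex -> repository.deleteAll(Flux.fromIterable(inserted))
                    .then(Mono.error(ex))); // best-effort cleanup, then re-signal the error
}

This is compensation rather than a true transaction: a concurrent reader may still observe the partial inserts before the cleanup runs.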

JdbcTemplate separate transactions created for each query

I'm using JDBC for some SQL queries and I wanted to execute all the separate queries in one method in a single transaction. I tried to set a transaction-scoped configuration setting in one query and read it in another:
@Transactional
public void testJDBC() {
    SqlRowSet rowSet = jdbcTemplate.queryForRowSet("select set_config('transaction_test','im_here',true)");
    String result;
    while (rowSet.next()) {
        result = rowSet.getString("set_config");
        System.out.println("Result1: " + result);
    }

    SqlRowSet rowSet2 = jdbcTemplate.queryForRowSet("select current_setting('transaction_test',true)");
    String result2;
    while (rowSet2.next()) {
        result2 = rowSet2.getString("current_setting");
        System.out.println("Result2: " + result2);
    }
}
But my second query uses another transaction, or both queries are not transactional, because the result looks like this:
Result1: im_here
Result2:
I don't get what is wrong here: despite the @Transactional annotation, it is still not transactional.
Here are my bean settings:

@Bean
public PlatformTransactionManager transactionManager(EntityManagerFactory emf) {
    JpaTransactionManager transactionManager = new JpaTransactionManager();
    transactionManager.setEntityManagerFactory(emf);
    return transactionManager;
}

public BasicDataSource getApacheDataSource() {
    BasicDataSource dataSource = new BasicDataSource();
    dataSource.setDriverClassName(environment.getRequiredProperty("jdbc.driverClassName"));
    dataSource.setUrl(getUrl());
    dataSource.setUsername(getEnvironmentProperty("spring.datasource.username"));
    dataSource.setPassword(getEnvironmentProperty("spring.datasource.password"));
    return dataSource;
}

@Bean
public JdbcTemplateExtended jdbc() {
    return new JdbcTemplateExtended(getApacheDataSource());
}
I think making sure the @Transactional annotations are being handled correctly is the first step in troubleshooting. To do this, add the following settings to your application.properties (or application.yml) file. I assume you are using Spring Boot.
logging:
  level:
    org:
      springframework:
        transaction:
          interceptor: trace
If you run the logic after applying the above settings, you will see log messages like the following:
2020-10-02 14:45:07,162 TRACE - Getting transaction for [com.Class.method]
2020-10-02 14:45:07,273 TRACE - Completing transaction for [com.Class.method]
Make sure the @Transactional annotation is handled properly by the TransactionInterceptor.
Note: @Transactional works through proxy objects. If you call the method from another method of the same class, or instantiate the class directly instead of autowiring it, no proxy is involved and the @Transactional annotation's expected behavior is not applied.
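To make the note concrete, a minimal sketch of the self-invocation pitfall (the class and method names are made up for illustration):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReportService {

    // When called through the injected (proxied) bean, a transaction is opened.
    @Transactional
    public void transactionalWork() {
        // ... JDBC calls that should share one transaction ...
    }

    // Self-invocation: this call bypasses the Spring proxy, so
    // transactionalWork() runs WITHOUT a transaction from this path.
    public void outerWork() {
        transactionalWork();
    }
}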

How to catch any exceptions thrown by BigQueryIO.Write and rescue the data that failed to be output?

I want to read data from Cloud Pub/Sub and write it to BigQuery with Cloud Dataflow. Each record contains a table ID where the record itself will be saved.
There are various factors that can make writing to BigQuery fail:
The table ID format is wrong.
The dataset does not exist.
The dataset does not allow the pipeline access.
Network failure.
When one of these failures occurs, the streaming job retries the task and stalls. I tried using WriteResult.getFailedInserts() in order to rescue the bad data and avoid stalling, but it did not work well. Is there any good way?
Here is my code:
public class StarterPipeline {

    private static final Logger LOG = LoggerFactory.getLogger(StarterPipeline.class);

    public static class MyData implements Serializable {
        String table_id;
    }

    public interface MyOptions extends PipelineOptions {
        @Description("PubSub topic to read from, specified as projects/<project_id>/topics/<topic_id>")
        @Validation.Required
        ValueProvider<String> getInputTopic();
        void setInputTopic(ValueProvider<String> value);
    }

    public static void main(String[] args) {
        MyOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().as(MyOptions.class);

        Pipeline p = Pipeline.create(options);

        PCollection<MyData> input = p
            .apply("ReadFromPubSub", PubsubIO.readStrings().fromTopic(options.getInputTopic()))
            .apply("ParseJSON", MapElements.into(TypeDescriptor.of(MyData.class))
                .via((String text) -> new Gson().fromJson(text, MyData.class)));

        WriteResult writeResult = input
            .apply("WriteToBigQuery", BigQueryIO.<MyData>write()
                .to(new SerializableFunction<ValueInSingleWindow<MyData>, TableDestination>() {
                    @Override
                    public TableDestination apply(ValueInSingleWindow<MyData> input) {
                        MyData myData = input.getValue();
                        return new TableDestination(myData.table_id, null);
                    }
                })
                .withSchema(new TableSchema().setFields(new ArrayList<TableFieldSchema>() {{
                    add(new TableFieldSchema().setName("table_id").setType("STRING"));
                }}))
                .withFormatFunction(new SerializableFunction<MyData, TableRow>() {
                    @Override
                    public TableRow apply(MyData myData) {
                        return new TableRow().set("table_id", myData.table_id);
                    }
                })
                .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
                .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND)
                .withFailedInsertRetryPolicy(InsertRetryPolicy.neverRetry()));

        writeResult.getFailedInserts()
            .apply("LogFailedData", ParDo.of(new DoFn<TableRow, TableRow>() {
                @ProcessElement
                public void processElement(ProcessContext c) {
                    TableRow row = c.element();
                    LOG.info(row.get("table_id").toString());
                }
            }));

        p.run();
    }
}
There is no easy way to catch exceptions when writing to output in a pipeline definition. I suppose you could do it by writing a custom PTransform for BigQuery. However, there is no way to do it natively in Apache Beam. I also recommend against this because it undermines Cloud Dataflow's automatic retry functionality.
In your code example, you have the failed insert retry policy set to never retry. You can set the policy to always retry. This is only effective during something like an intermittent network failure (4th bullet point).
.withFailedInsertRetryPolicy(InsertRetryPolicy.alwaysRetry())
If the table ID format is incorrect (1st bullet point), then the CREATE_IF_NEEDED create disposition configuration should allow the Dataflow job to automatically create a new table without error, even if the table ID is incorrect.
If the dataset does not exist or there is an access permission issue to the dataset (2nd and 3rd bullet points), then my opinion is that the streaming job should stall and ultimately fail. There is no way to proceed under any circumstances without manual intervention.
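Beyond the retry policy, a common Beam pattern for the validation cases is to check records in a DoFn before BigQueryIO and route bad ones to a dead-letter output. A hedged sketch using the question's MyData type (the tag names and the validation rule are made up for illustration):

import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollectionTuple;
import org.apache.beam.sdk.values.TupleTag;
import org.apache.beam.sdk.values.TupleTagList;

final TupleTag<MyData> validTag = new TupleTag<MyData>() {};
final TupleTag<MyData> invalidTag = new TupleTag<MyData>() {};

PCollectionTuple routed = input.apply("ValidateTableId",
    ParDo.of(new DoFn<MyData, MyData>() {
        @ProcessElement
        public void processElement(ProcessContext c) {
            MyData data = c.element();
            // Illustrative check only; a real rule depends on your table ID format.
            if (data.table_id != null && !data.table_id.isEmpty()) {
                c.output(data);              // main output: goes on to BigQueryIO
            } else {
                c.output(invalidTag, data);  // dead letter: log or archive for repair
            }
        }
    }).withOutputTags(validTag, TupleTagList.of(invalidTag)));

// routed.get(validTag) feeds BigQueryIO.write() as before;
// routed.get(invalidTag) can be logged or written somewhere for later inspection.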

In-memory H2 database, insert not working in SpringBootTest

I have a Spring Boot application which I wish to test.
Below are the details of my files.
application.properties
PRODUCT_DATABASE_PASSWORD=
PRODUCT_DATABASE_USERNAME=sa
PRODUCT_DATABASE_CONNECTION_URL=jdbc:h2:file:./target/db/testdb
PRODUCT_DATABASE_DRIVER=org.h2.Driver
RED_SHIFT_DATABASE_PASSWORD=
RED_SHIFT_DATABASE_USERNAME=sa
RED_SHIFT_DATABASE_CONNECTION_URL=jdbc:h2:file:./target/db/testdb
RED_SHIFT_DATABASE_DRIVER=org.h2.Driver
spring.datasource.platform=h2
Configuration class:

@SpringBootConfiguration
@SpringBootApplication
@Import({ProductDataAccessConfig.class, RedShiftDataAccessConfig.class})
public class TestConfig {
}
Main test class:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringBootTest(classes = {TestConfig.class, ConfigFileApplicationContextInitializer.class}, webEnvironment = SpringBootTest.WebEnvironment.NONE)
public class MainTest {

    @Autowired(required = true)
    @Qualifier("dataSourceRedShift")
    private DataSource dataSource;

    @Test
    public void testHourlyBlock() throws Exception {
        insertDataIntoDb(); // data successfully inserted
        SpringApplication.run(Application.class, new String[]{}); // no data found
    }
}
Data access in Application.class:

try (Connection conn = dataSourceRedShift.getConnection();
     Statement stmt = conn.createStatement()) {
    // access inserted data
}
Please help!
PS: for the Spring Boot application the test beans are being picked up, so bean instantiation is definitely not the problem. I think I am missing some properties.
I do not use Hibernate in my application, and the data goes away even within the same application context (child context), i.e. when I run a Spring Boot application which reads the data inserted earlier.
Problem solved: removing spring.datasource.platform=h2 from application.properties made my H2 data persist.
But I still wish to know how H2 is being started automatically.
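A hedged guess at the mechanism: Spring Boot detects H2 on the classpath and treats it as an embeddable database, and spring.datasource.platform=h2 additionally makes the data source initializer pick up platform-specific schema-h2.sql/data-h2.sql scripts on startup, which can recreate state. Independently of Spring, H2 closes a file database when its last connection closes; H2's own URL options can keep it open across contexts in the same JVM. An illustrative tweak to the properties above, not a confirmed fix:

PRODUCT_DATABASE_CONNECTION_URL=jdbc:h2:file:./target/db/testdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE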

Handle NHibernate Load fail

I have the following method.
public Foo GetById(int id)
{
    ISession session = GetCurrentSession();
    try
    {
        return session.Load<Foo>(id);
    }
    catch (NHibernate.ObjectNotFoundException onfe)
    {
        throw(onfe);
    }
}
Unfortunately, onfe is never thrown. I want to handle the case where I only get back a proxy because no matching row exists in the database.
I suggest you write your own ObjectNotFoundException and rewrite the method as:

public Foo GetById(int id)
{
    Foo foo;
    ISession session = GetCurrentSession();
    foo = session.Get<Foo>(id);
    if (foo == null)
    {
        throw new ObjectNotFoundException(string.Format("Foo with id '{0}' not found.", id));
    }
    return foo;
}
There are two problems with your method as written:
You should use Get to load an entity by its key.
Your exception handling is wrapping the original exception and re-throwing it for no reason.
If an entity is allowed to be lazy-loaded, the Load method returns an uninitialized proxy, and the ObjectNotFoundException is thrown only once the proxy is about to be initialized.
The Get method should be preferred when you are not sure that the requested entity exists.
See:
Nhibernate error: NO row with given identifier found error
https://sites.google.com/a/thedevinfo.com/thedevinfo/Home/or-persistence/hibernate/hibernate-faq