@Transactional method calling ReactiveCassandraRepository.insert with Flux does not roll back all if one element fails - spring-webflux

spring-data-cassandra version 3.4.6
spring-boot version 2.7.7
cassandra version 3.11
DataStax Java driver 3.10.0
We have a Controller method POST /all calling a service class with a transactional method inserting a Flux of simple versioned entities:
Controller
#PostMapping("/all")
#Operation(summary = "Create all data for the request account")
public Flux<SampleCassandraData> create(#RequestBody Flux<SampleCassandraData> datas,
RequestContext context) {
datas = datas.doOnNext(data -> data.setAccountId(context.getAccountId()));
return service.create(datas);
}
Service method
@Transactional
public Flux<SampleCassandraData> create(@NonNull Flux<SampleCassandraData> datas) {
    return repository.insert(datas);
}
The entity:
#Table("sample_table")
#EqualsAndHashCode(onlyExplicitlyIncluded = true)
public class SampleCassandraData {
#PrimaryKeyColumn(name = "account_id", type = PARTITIONED)
#EqualsAndHashCode.Include
private String accountId;
#PrimaryKeyColumn(name = "id", type = CLUSTERED, ordering = ASCENDING)
#EqualsAndHashCode.Include
private UUID id;
#Column("content")
private String content;
#Version
private Long version;
}
We have an integration test (using a Cassandra Testcontainer) that inserts an element and then sends an array of 2 elements, including the existing one. The expected response status is 409 CONFLICT, with an OptimisticLockingFailureException in the logs and no data inserted.
@Test
void when_POST_all_with_duplicated_in_1st_then_CONFLICT() {
    val existingData = given_single_created();
    datas.set(0, existingData);
    val newData = given_new_data();
    datas.set(1, newData);
    when_create_all(datas);
    then_error_json_returned(response, CONFLICT);
    then_data_not_saved(newData);
}
Test results:
HttpStatus is 409 CONFLICT as expected
Response JSON body is our error response as expected
Service logs show the OptimisticLockingFailureException stack trace as expected
Test fails: newData was saved in Cassandra, whereas we expected the transaction to be fully rolled back.
What are we doing wrong?
Is there a configuration or annotation attribute we must set to enable full rollback?
Or is it the reactive nature, meaning we can't expect a full rollback?
Thanks in advance

Related

Aggregating requests in WebFlux

Hi, I am experimenting with Spring WebFlux and Reactor.
What I want to achieve is to aggregate several requests into one call to a third-party API and then return the results. My code looks like this:
@RestController
@RequiredArgsConstructor
@RequestMapping(value = "/aggregate", produces = MediaType.APPLICATION_JSON_VALUE)
public class AggregationController {

    private final WebClient externalApiClient;

    // sink that stores the query params
    private Sinks.Many<String> sink = Sinks.many().multicast().onBackpressureBuffer();

    @GetMapping
    public Mono<Mono<Map<String, BigDecimal>>> aggregate(@RequestParam(value = "pricing", required = false) List<String> countryCodes) {
        // store the query params in the sink
        countryCodes.forEach(sink::tryEmitNext);

        return Mono.from(
                sink.asFlux()
                        .distinct()
                        .bufferTimeout(5, Duration.ofSeconds(5))
                )
                .map(countries -> String.join(",", countries))
                .map(concatenatedCountries -> invokeExternalPricingApi(concatenatedCountries))
                .doOnSuccess(s -> sink.emitComplete((signalType, emitResult) -> emitResult.isSuccess()));
    }

    private Mono<Map<String, BigDecimal>> invokeExternalPricingApi(String countryCodes) {
        return externalApiClient.get().uri(uriBuilder -> uriBuilder
                        .path("/pricing")
                        .queryParam("q", countryCodes)
                        .build())
                .accept(MediaType.APPLICATION_JSON)
                .retrieve()
                .bodyToMono(new ParameterizedTypeReference<>() {
                });
    }
}
I have the Sinks.Many<String> sink to accumulate the query params from the callers of the API. When there are 5 items or 5 seconds have elapsed, I want to call the third-party API. The issue is that when I make two requests like:
GET localhost:8081/aggregate?pricing=CH,EU
GET localhost:8081/aggregate?pricing=AU,NZ,PT
On one request I get the response and on the other I get null. Also, after the first interaction the system stops working as if the sink was broken.
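To illustrate the "sink looks broken" symptom in isolation: once a Sinks.Many has been completed (which the doOnSuccess above does via emitComplete), any later emission is rejected and the sink cannot be reused. A minimal standalone sketch of that Reactor behaviour, outside WebFlux; the class name is hypothetical and not part of the application:
import reactor.core.publisher.Sinks;

public class SinkTerminationDemo {
    public static void main(String[] args) {
        Sinks.Many<String> sink = Sinks.many().multicast().onBackpressureBuffer();

        // Before completion, emissions are accepted (buffered until a subscriber arrives).
        System.out.println(sink.tryEmitNext("CH"));   // OK

        // Completing the sink terminates it, just like the doOnSuccess(...) above does.
        System.out.println(sink.tryEmitComplete());   // OK

        // After termination the sink cannot be reused; further emissions are rejected.
        System.out.println(sink.tryEmitNext("AU"));   // FAIL_TERMINATED
    }
}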
What am I missing here?

While running Karate (1.0.1) tests from Spock, a system property that was set in a mock ends up undefined in karate.properties['message']

In Karate version 0.9.5 I was able to call System.setProperty('message', message) during a mock invocation. That property was then available inside a feature via karate.properties['message']. I have upgraded to version 1.0.1 and now karate.properties['message'] results in undefined.
Spock Test code
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class ApiTestRunnerSpec extends Specification {

    @LocalServerPort
    private int port

    @SpringBean
    MessageLogger messageLogger = Mock()

    def "setup"() {
        System.out.println("Running on port: " + port)
        System.setProperty("server.port", "" + port)
    }

    def "Run Mock ApiTest"() {
        given:
        System.setProperty('foo', 'bar')

        when:
        Results results = Runner.path("classpath:").tags("~@ignore").parallel(5)

        then:
        results != null
        1 * messageLogger.logMessage(_ as String) >> { String message ->
            assert message != null
            System.setProperty("message", message)
        }
    }
}
Controller
@RestController
public class MessageController {

    @Autowired private MessageLogger messageLogger;

    @GetMapping("/message")
    public String message() {
        String message = "Important Message";
        messageLogger.logMessage(message);
        return message;
    }
}
MessageLogger
@Component
public class MessageLogger {
    public void logMessage(String message) {
        System.out.println(message);
    }
}
karate-config.js
function fn() {
    karate.configure('connectTimeout', 10000);
    karate.configure('readTimeout', 10000);
    karate.configure('ssl', true);
    var config = {
        localUrl: 'http://localhost:' + java.lang.System.getProperty('server.port'),
    };
    print('localUrl::::::::::', config.localUrl);
    return config;
}
Feature
@mockMessage
@parallel=true
Feature: Test Message

Background:
  * url localUrl

Scenario: GET
  Given path '/message'
  When method get
  Then status 200
  * print 'foo value ' + karate.properties['foo']
  * print 'message value ' + karate.properties['message']
0.9.5
2021-04-28 15:07:51.819 (...) [print] **foo value bar**
2021-04-28 15:07:51.826 (...) [print] **message value Important Message**
1.0.1
2021-04-28 14:36:58.566 (...) [print] **foo value bar**
2021-04-28 14:36:58.580 (...) [print] **message value undefined**
Link to project on github
I cloned your project and noticed a few outdated things (Groovy, Spock and GMaven+ versions). Upgrading them did not change the outcome; I can still reproduce your problem.
I also noticed that in your two branches the POM differs in more than just the Karate version number; the dependencies differ as well. If I use the ones from the 1.0.1 branch, the tests no longer work under 0.9.5. So I forked your project and sent you two pull requests, one for each branch, with a dependency setup that works identically for both Karate versions. Now the branches really differ only in the Karate version number:
https://github.com/kriegaex/spock-karate-example/compare/karate-0.9.5...kriegaex:karate-1.0.1
BTW, for some reason I had to compile your code on JDK 11; JDK 16 did not work. GMaven+ complained about Java 16 Groovy class files (bytecode version 60.0), even though GMaven+ should have used target level 11. No idea what this is about. Anyway, on Java 11 I can reproduce your problem. As the Spock version is identical for both branches, I guess the problem is within Karate itself. I recommend opening an issue there, linking to your GitHub project (after you have accepted my PRs). Spock definitely sets the system property; I added more log output inside the stubbing closure in order to verify that. Maybe this is an issue concerning how and when Karate communicates with Spock.
Update: In his answer, Peter Thomas suggested storing the value to be transferred to the feature in a Java object and accessing it from the feature after the Spock test has set it. I guess he means something like this:
https://github.com/kriegaex/spock-karate-example/commit/ca88e3da
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class ApiTestRunnerSpec extends Specification {

    @LocalServerPort
    private int port

    @SpringBean
    MessageLogger messageLogger = Mock() {
        1 * logMessage(_ as String) >> { String message ->
            assert message != null
            MessageHolder.INSTANCE.message = message
        }
    }

    def "setup"() {
        System.out.println("Running on port: " + port)
        System.setProperty("server.port", "" + port)
    }

    def "Run Mock ApiTest"() {
        given:
        Results results = Runner
            .path("classpath:")
            .systemProperty("foo", "bar")
            .tags("~@ignore")
            .parallel(5)

        expect:
        results
    }

    static class MessageHolder {
        public static final MessageHolder INSTANCE = new MessageHolder()

        private String message

        private MessageHolder() {}

        String getMessage() {
            return message
        }

        void setMessage(String message) {
            this.message = message
        }
    }
}
@mockMessage
@parallel=true
Feature: Test Message

Background:
  * url localUrl

Scenario: GET
  Given path '/message'
  When method get
  Then status 200
  * print 'foo value ' + karate.properties['foo']
  * def getMessage =
    """
    function() {
      var MessageHolder = Java.type('com.example.spock.karate.ApiTestRunnerSpec.MessageHolder');
      return MessageHolder.INSTANCE.getMessage();
    }
    """
  * def message = call getMessage {}
  * print 'message value ' + message
Update 2: This is the implementation of Peter's second idea: simply accessing Java system properties via JS. I simplified the working, but unnecessarily complicated, message-holder-singleton version and eliminated the singleton again:
https://github.com/kriegaex/spock-karate-example/commit/e235dd71
Now it simply looks like this (similar to the original Spock specification, only refactored to be a bit less verbose):
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class ApiTestRunnerSpec extends Specification {

    @LocalServerPort
    private int port

    @SpringBean
    MessageLogger messageLogger = Mock() {
        1 * logMessage(_ as String) >> { String message ->
            assert message != null
            System.setProperty('message', message)
        }
    }

    def "setup"() {
        System.out.println("Running on port: " + port)
        System.setProperty("server.port", "" + port)
    }

    def "Run Mock ApiTest"() {
        expect:
        Runner.path("classpath:").systemProperty("foo", "bar").tags("~@ignore").parallel(5)
    }
}
The only important change is in the Karate feature:
@mockMessage
@parallel=true
Feature: Test Message

Background:
  * url localUrl

Scenario: GET
  Given path '/message'
  When method get
  Then status 200
  * print 'foo value ' + karate.properties['foo']
  * def getMessage = function() { return Java.type('java.lang.System').getProperty('message'); }
  * print 'message value ' + getMessage()
The Runner "builder" has a .systemProperty() method which is recommended.
Please refer: https://github.com/intuit/karate/wiki/1.0-upgrade-guide#improved-test-suite-builder
So this should work. Otherwise, as I said in the comments, please submit a way to replicate the problem.
Results results = Runner.path("classpath:")
        .systemProperty("foo", "bar")
        .tags("~@ignore").parallel(5);
EDIT: I can confirm that karate.properties is made immutable at the time the test suite starts.
So here are the 3 options:
change your test strategy so that all dynamic params are resolved before you start
instead of karate.properties[], use the old-school java.lang.System.getProperty('foo') call in Karate / JS; I'm pretty sure that will work
use a Java singleton to hold shared state for your test-runner and karate-feature

How to catch exceptions thrown by BigQueryIO.Write and rescue the data that failed to be written?

I want to read data from Cloud Pub/Sub and write it to BigQuery with Cloud Dataflow. Each element contains the ID of the table where the data itself should be saved.
There are various reasons why writing to BigQuery can fail:
The table ID format is wrong.
The dataset does not exist.
The dataset does not allow the pipeline access.
Network failure.
When one of these failures occurs, the streaming job retries the task and stalls. I tried using WriteResult.getFailedInserts() to rescue the bad data and avoid stalling, but it did not work well. Is there a good way to do this?
Here is my code:
public class StarterPipeline {
    private static final Logger LOG = LoggerFactory.getLogger(StarterPipeline.class);

    public class MyData implements Serializable {
        String table_id;
    }

    public interface MyOptions extends PipelineOptions {
        @Description("PubSub topic to read from, specified as projects/<project_id>/topics/<topic_id>")
        @Validation.Required
        ValueProvider<String> getInputTopic();
        void setInputTopic(ValueProvider<String> value);
    }

    public static void main(String[] args) {
        MyOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().as(MyOptions.class);
        Pipeline p = Pipeline.create(options);

        PCollection<MyData> input = p
            .apply("ReadFromPubSub", PubsubIO.readStrings().fromTopic(options.getInputTopic()))
            .apply("ParseJSON", MapElements.into(TypeDescriptor.of(MyData.class))
                .via((String text) -> new Gson().fromJson(text, MyData.class)));

        WriteResult writeResult = input
            .apply("WriteToBigQuery", BigQueryIO.<MyData>write()
                .to(new SerializableFunction<ValueInSingleWindow<MyData>, TableDestination>() {
                    @Override
                    public TableDestination apply(ValueInSingleWindow<MyData> input) {
                        MyData myData = input.getValue();
                        return new TableDestination(myData.table_id, null);
                    }
                })
                .withSchema(new TableSchema().setFields(new ArrayList<TableFieldSchema>() {{
                    add(new TableFieldSchema().setName("table_id").setType("STRING"));
                }}))
                .withFormatFunction(new SerializableFunction<MyData, TableRow>() {
                    @Override
                    public TableRow apply(MyData myData) {
                        return new TableRow().set("table_id", myData.table_id);
                    }
                })
                .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
                .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND)
                .withFailedInsertRetryPolicy(InsertRetryPolicy.neverRetry()));

        writeResult.getFailedInserts()
            .apply("LogFailedData", ParDo.of(new DoFn<TableRow, TableRow>() {
                @ProcessElement
                public void processElement(ProcessContext c) {
                    TableRow row = c.element();
                    LOG.info(row.get("table_id").toString());
                }
            }));

        p.run();
    }
}
There is no easy way to catch exceptions thrown while writing to the output in a pipeline definition. I suppose you could do it by writing a custom PTransform for BigQuery, but there is no way to do it natively in Apache Beam. I also recommend against this, because it undermines Cloud Dataflow's automatic retry functionality.
In your code example, you have the failed insert retry policy set to never retry. You can set the policy to always retry. This is only effective for something like an intermittent network failure (4th bullet point).
.withFailedInsertRetryPolicy(InsertRetryPolicy.alwaysRetry())
If the table ID format is incorrect (1st bullet point), then the CREATE_IF_NEEDED create disposition should allow the Dataflow job to automatically create a new table without error, even with the incorrect table ID.
If the dataset does not exist, or there is a permission issue accessing it (2nd and 3rd bullet points), then in my opinion the streaming job should stall and ultimately fail; there is no way to proceed without manual intervention.
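If the goal is still to rescue whatever does come out of getFailedInserts() (for example, insert errors under a non-retrying policy), one common pattern is to append those rows to a dead-letter table instead of only logging them. A minimal sketch building on the writeResult above; the dead-letter table reference is hypothetical:
// Hypothetical dead-letter handling: append failed rows to a fixed fallback table
// so they can be inspected and replayed later, instead of only being logged.
writeResult.getFailedInserts()
    .apply("WriteFailedRowsToDeadLetter", BigQueryIO.writeTableRows()
        .to("my-project:my_dataset.failed_rows")   // hypothetical table reference
        .withSchema(new TableSchema().setFields(new ArrayList<TableFieldSchema>() {{
            add(new TableFieldSchema().setName("table_id").setType("STRING"));
        }}))
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));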

@SpringBootTest Service Transaction not rolling back if @Test is @Transactional

I am using Spring Boot and my integration test class looks like the following:
@RunWith(SpringRunner.class)
@SpringBootTest(classes = IWantServiceRequestApp.class)
public class ServiceRequestResourceIntTest {
    ...
}
In the following test case I am updating a "ServiceRequest" entity with an address, passing an invalid stateId:
@Test
@Transactional
public void updateWithNonExistingState() throws Exception {
    UpdateServiceRequestDTO updateServiceRequestDTO = serviceRequestMapper.toUpdateServiceRequestDTO(updatedServiceRequest);
    updateWithDefaultUpdateValues(updateServiceRequestDTO); // update properties of updateServiceRequestDTO object
    defaultUpdateAddressDTO.setStateId(NON_EXISTING_ID);
    updateServiceRequestDTO.setServiceLocation(defaultUpdateAddressDTO);
    String payload = TestUtil.convertObjectToJsonString(updateServiceRequestDTO);

    restServiceRequestMockMvc.perform(put("/api/servicerequest")
            .contentType(TestUtil.APPLICATION_JSON_UTF8)
            .content(payload))
        .andExpect(status().isNotFound())
        .andExpect(jsonPath("$.code").value(ResponseCodes.STATE_NOT_FOUND))
    ; // this calls ServiceRequestServiceImpl.update()

    // Validate the ServiceRequest in the database
    List<ServiceRequest> serviceRequests = serviceRequestRepository.findAll();
    assertThat(serviceRequests).hasSize(databaseSizeBeforeUpdate);
    ServiceRequest testServiceRequest = serviceRequests.get(serviceRequests.size() - 1);
    assertNull(testServiceRequest.getServiceLocation());
    assertThat(serviceRequest.getType()).isEqualTo(DEFAULT_TYPE); // **FAILS HERE**
}
When I run the above test case, an exception is thrown from the service layer and @ControllerAdvice converts it into a proper REST response body.
The problem, however, is that in the service layer the entity object is updated with the new field values from the request before the exception is thrown, and when I assert against the DB expecting the non-updated field values, it returns the updated values.
@Service
@Transactional
public class ServiceRequestServiceImpl implements ServiceRequestService {

    public ServiceRequestDTO update(UpdateServiceRequestDTO updateServiceRequestDTO) {
        ServiceRequest serviceRequest = serviceRequestRepository.findOne(updateServiceRequestDTO.getId());
        serviceRequestMapper.updateServiceRequest(updateServiceRequestDTO, serviceRequest);
        // the above call updates the entity's fields and then throws an exception; the updated fields
        // should not be reflected in the DB, as the transaction will be rolled back due to the exception
        serviceRequest = serviceRequestRepository.save(serviceRequest);
        ServiceRequestDTO result = serviceRequestMapper.toServiceRequestDTO(serviceRequest);
        return result;
    }
}
My thought is that Spring treats the @Transactional on the @Test as the actual transaction and does not roll back the transaction declared on the @Service. If that is the case, then I would like to roll back the service transaction only.
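If that is indeed what is happening, one way to express "roll back the service transaction only" is to give the service method its own transaction with REQUIRES_NEW. A minimal sketch of the idea, not a verified fix; note that a REQUIRES_NEW transaction commits or rolls back independently of the surrounding test transaction, which also changes how test data is cleaned up:
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ServiceRequestServiceImpl implements ServiceRequestService {

    // REQUIRES_NEW suspends the caller's (test's) transaction and opens a new one,
    // so an exception thrown here rolls back only the service-level changes.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public ServiceRequestDTO update(UpdateServiceRequestDTO updateServiceRequestDTO) {
        // ... same body as shown above ...
    }
}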

Spring Data Rest Content Type

I am writing unit tests for my application with Spring Data REST MongoDB. Based on Josh's "Building REST services with Spring" getting-started guide, I have the following test code:
@Test
public void readSingleAccount() throws Exception {
    mockMvc.perform(get("/accounts/" + this.account.getId()))
        .andExpect(status().isOk())
        .andExpect(content().contentType(contentType))
        .andExpect(jsonPath("$.id", is(this.account.getId())))
        .andExpect(jsonPath("$.email", is(this.account.getEmail())))
        .andExpect(jsonPath("$.password", is(this.account.getPassword())));
}
And this test fails on the content type.
Content type expected:<application/json;charset=UTF-8> but was: <application/hal+json>
Expected :application/json;charset=UTF-8
Actual :application/hal+json
I don't see a MediaType constant for HAL. Is the content type defined in another class?
I had the same problem when not using Tomcat (which Spring Boot configures to return UTF-8). The solution is to set the Accept header in your GET request so the response gets the correct content type:
private MediaType contentType = new MediaType("application", "hal+json", Charset.forName("UTF-8"));
and in your request, do
@Test
public void readSingleAccount() throws Exception {
    mockMvc.perform(get("/accounts/" + this.account.getId()).accept(contentType))
        .andExpect(status().isOk())
        .andExpect(content().contentType(contentType))
        .andExpect(jsonPath("$.id", is(this.account.getId())))
        .andExpect(jsonPath("$.email", is(this.account.getEmail())))
        .andExpect(jsonPath("$.password", is(this.account.getPassword())));
}